CN112750137B - Liver tumor segmentation method and system based on deep learning - Google Patents


Info

Publication number
CN112750137B
CN112750137B (application CN202110049310.2A)
Authority
CN
China
Prior art keywords
convolution
network model
feature
image
liver tumor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110049310.2A
Other languages
Chinese (zh)
Other versions
CN112750137A (en)
Inventor
肖志勇
刘一鸣
柴志雷
周锋盛
丁炎
张雨
Current Assignee
Jiangnan University
Wuxi Peoples Hospital
Original Assignee
Jiangnan University
Wuxi Peoples Hospital
Priority date
Filing date
Publication date
Application filed by Jiangnan University and Wuxi Peoples Hospital
Priority to CN202110049310.2A
Publication of CN112750137A
Application granted
Publication of CN112750137B
Legal status: Active

Classifications

    • G06T7/11 Region-based segmentation (G06T Image data processing; G06T7/00 Image analysis; G06T7/10 Segmentation; edge detection)
    • G06N3/047 Probabilistic or stochastic networks (G06N3/02 Neural networks; G06N3/04 Architecture)
    • G06N3/084 Backpropagation, e.g. using gradient descent (G06N3/08 Learning methods)
    • G06T2207/10081 Computed x-ray tomography [CT] (G06T2207/10 Image acquisition modality)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30056 Liver; Hepatic (G06T2207/30 Subject of image)
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a deep-learning-based liver tumor segmentation method and system, wherein the method comprises the following steps: preprocessing the collected data set; building a network model from the preprocessed data, wherein the network model comprises a plurality of lower convolution layers and a plurality of upper convolution layers, the lower convolution layers are linked to the upper convolution layers via skip connections, and the feature maps produced by the lower convolution layers are passed to the upper convolution layers through attention modules placed on the skip connections; inputting the preprocessed data into the network model for training to obtain an optimal network model; and segmenting the CT image to be processed with the optimal network model to obtain the liver tumor region. The invention helps reduce mis-segmentation and achieves higher accuracy.

Description

Liver tumor segmentation method and system based on deep learning
Technical Field
The invention relates to the technical field of medical image processing and application, in particular to a liver tumor segmentation method and system based on deep learning.
Background
In recent years, CT/MRI imaging has become ubiquitous in medical practice. Before making a diagnosis, physicians often need to manually segment the lesion area in CT/MRI images to support subsequent surgical planning and tumor therapy assessment. However, manual segmentation is not only time-consuming and labor-intensive, but also susceptible to variation arising from the physician's subjective judgment. Automatic segmentation of liver tumors in CT/MRI images by computer algorithms can therefore greatly reduce physicians' workload while providing accurate and repeatable segmentation to assist subsequent diagnosis.
Early traditional medical image segmentation methods mainly include thresholding, active contours, region growing and level sets. Later, as hardware and techniques advanced, machine-learning-based segmentation methods such as support vector machines and clustering emerged. Compared with deep-learning-based methods, however, these traditional approaches are semi-automatic. For example, the quality of region growing is determined by the choice of seed point and the formulation of the growth rule, so a suitable interval often has to be found through repeated trials before the algorithm can fit the boundary, which depends heavily on physician experience. Likewise, machine-learning segmentation algorithms require manual design and selection of liver lesion features, which demands corresponding domain expertise from the researcher. Traditional segmentation methods therefore suffer from poor timeliness, generality and accuracy.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the poor timeliness, generality and accuracy of traditional segmentation methods when tumors vary in position, shape and size across the CT images of different patients, by providing a deep-learning-based liver tumor segmentation method and system with high timeliness, generality and accuracy.
To solve this technical problem, the deep-learning-based liver tumor segmentation method provided by the invention comprises the following steps. Step S1: preprocess the collected data set. Step S2: build a network model from the preprocessed data, wherein the network model comprises a plurality of lower convolution layers and a plurality of upper convolution layers, the lower convolution layers are linked to the upper convolution layers via skip connections, and the feature maps produced by the lower convolution layers are passed to the upper convolution layers through attention modules placed on the skip connections. Step S3: input the preprocessed data into the network model for training to obtain an optimal network model. Step S4: segment the CT image to be processed with the optimal network model to obtain the liver tumor region.
In one embodiment of the invention, the collected data set is preprocessed as follows: find the start and end positions of the liver region, clip the HU values of the CT images in the dataset to a specified range, standardize the CT images, and slice the CT images.
In one embodiment of the invention, each lower convolution layer comprises a plurality of convolution layers, a normalization layer, a rectified linear unit (ReLU) and a max pooling layer.
In one embodiment of the invention, each upper convolution layer comprises a plurality of convolution layers, a normalization layer, a rectified linear unit and an up-sampling layer.
In one embodiment of the invention, the attention module obtains geometric information of the liver tumor through a deformable convolution and, in parallel, processes the feature map obtained after convolution to produce an output feature map.
In one embodiment of the invention, the feature map obtained after convolution is processed as follows: on the skip connection, convolutions transform it into several new feature maps; high-level information about image boundaries is extracted and a feature probability map is computed; the feature probability map is used for weighting; and the weighted result is summed with the feature map produced by the deformable convolution module to obtain the output feature map.
In one embodiment of the invention, the network model further comprises a deep supervision module.
In one embodiment of the invention, the deep supervision module works as follows: after each deconvolution stage finishes, the resulting feature map is up-sampled to the original size according to the corresponding scaling factor and the loss for that stage is computed; each stage's loss is then assigned a corresponding weight and the weighted losses are summed.
In one embodiment of the invention, the preprocessed data is input into the network model for training as follows: first, the data is fed into the built network model, the model is trained by forward propagation, a predicted probability map is output through a softmax classifier, and a loss function is selected; the error computed by the loss function is then back-propagated and the parameter values of the network model are updated; this process is repeated until the loss function value converges within the set range; finally, the obtained network model is validated to select the optimal network model.
The invention also provides a deep-learning-based liver tumor segmentation system, comprising: a preprocessing module for preprocessing the collected data set; a building module for building a network model from the preprocessed data, wherein the network model comprises a plurality of lower convolution layers and a plurality of upper convolution layers, the lower convolution layers are linked to the upper convolution layers via skip connections, and the feature maps produced by the lower convolution layers are passed to the upper convolution layers through attention modules placed on the skip connections; a training module for inputting the preprocessed data into the network model for training to obtain an optimal network model; and a segmentation module for segmenting the CT image to be processed with the optimal network model to obtain the liver tumor region.
Compared with the prior art, the technical scheme of the invention has the following advantages:
In the deep-learning-based liver tumor segmentation method, a deformable convolution is added on the skip connections; by adjusting the shape of the convolution window it achieves a scale-transformation effect, so information at different scales is learned better and geometric variations such as the shape and size of different objects are accommodated, further improving the segmentation performance of the model. Meanwhile, deep supervision is added to strengthen the network's generalization to the size changes across different CT slices, thereby reducing mis-segmentation. In addition, liver tumors in CT images are segmented automatically, and higher accuracy can be obtained than with existing mainstream methods.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings, in which
FIG. 1 is a flow chart of a liver tumor segmentation method based on deep learning;
FIG. 2 is a schematic diagram of a network model of the present invention;
FIG. 3 is a schematic diagram of the attention module (DA) on a skip connection of the present invention;
FIG. 4 is a schematic diagram of the depth supervision module of the present invention.
Detailed Description
Example 1
As shown in fig. 1 and 2, the present embodiment provides a deep-learning-based liver tumor segmentation method comprising the following steps. Step S1: preprocess the collected data set. Step S2: build a network model from the preprocessed data, wherein the network model comprises a plurality of lower convolution layers and a plurality of upper convolution layers, the lower convolution layers are linked to the upper convolution layers via skip connections, and the feature maps produced by the lower convolution layers are passed to the upper convolution layers through attention modules placed on the skip connections. Step S3: input the preprocessed data into the network model for training to obtain an optimal network model. Step S4: segment the CT image to be processed with the optimal network model to obtain the liver tumor region.
In the liver tumor segmentation method of this embodiment, step S1 preprocesses the collected data set to eliminate interference from irrelevant organs. Step S2 builds a network model from the preprocessed data, in which the lower convolution layers are linked to the upper convolution layers via skip connections and the feature maps produced by the lower convolution layers are passed to the upper convolution layers through attention modules on the skip connections. Because a deformable convolution is added on the skip connections, each cell of the convolution kernel can shift, changing the extent of the receptive field; the shape of the convolution window thus becomes trainable, achieving a scale-transformation effect, so information at different scales is learned better and geometric variations such as the shape and size of different objects are accommodated. Step S3 inputs the preprocessed data into the network model for training to obtain an optimal network model, which helps achieve higher accuracy. Step S4 segments the CT image to be processed with the optimal network model to obtain the liver tumor region, realizing automatic segmentation of liver tumors in CT images; the whole method is simple and offers high timeliness, generality and accuracy.
In step S1, the collected data set is preprocessed as follows: find the start and end positions of the liver region, clip the HU values of the CT images in the dataset to a specified range, standardize the CT images, and slice the CT images.
After slicing, the CT images are stored in PNG format. Specifically, the original image set used in the application contains several sets of CT image files in NII format. These images are three-dimensional, so, given that the model built here is two-dimensional, each patient's CT volume is sliced and saved as PNG images.
The specific steps of preprocessing the data set are as follows:
Since the acquired CT images are 512 x 512 x z volumes, directly slicing and storing the whole volume would produce many uninformative background images. The start and end positions of the liver region are therefore found before slicing, and the range is expanded outward by 20 slices to ensure the entire liver region is included. The HU values of the CT images are clipped to [-200, 200] and standardized to facilitate training of the neural network; slicing is then performed and the slices are stored in PNG format. After this processing, data augmentation is applied to the processed images.
The normalization can be written as:

x̂_i = (x_i − μ) / σ

where x_i is the i-th sample, and μ and σ are the mean and standard deviation of the samples, respectively.
The data augmentation applied to the processed images mainly comprises rotation, mirroring, elastic distortion and scaling. It gives the image set multiple variants of the same image at different angles and scales, increasing the number of images in the set. Expanding the image set in this way helps prevent the over-fitting that too few image samples would cause.
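The preprocessing described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; `preprocess_slice` is a hypothetical helper name, and the HU window matches the [-200, 200] range stated in the text.

```python
import numpy as np

def preprocess_slice(ct_slice, hu_min=-200, hu_max=200):
    """Clip a CT slice to the stated HU window, then z-score normalize it."""
    clipped = np.clip(ct_slice.astype(np.float32), hu_min, hu_max)
    mu, sigma = clipped.mean(), clipped.std()
    return (clipped - mu) / (sigma + 1e-8)  # epsilon guards against flat slices
```

Each normalized slice would then be saved (e.g. as PNG) and augmented by rotation, mirroring, elastic distortion and scaling.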
In step S2, the network model is built on the U-Net architecture, a convolutional neural network (CNN). Each lower convolution layer comprises a plurality of convolution layers, a normalization layer, a rectified linear unit and a max pooling layer; the pooling uses a 2 x 2 window with a stride of 2.
Each upper convolution layer comprises a plurality of convolution layers, a normalization layer, a rectified linear unit and an up-sampling layer. The up-sampling kernel size is 2 x 2, and the convolution kernel size is 3 x 3.
Specifically, the network model contains 4 lower convolutions and 4 upper convolutions. Each lower convolution comprises two 3 x 3 convolution layers (Conv2d), a normalization layer (BatchNorm2d), a rectified linear unit (ReLU) and a max pooling layer (MaxPooling) with a 2 x 2 window and a stride of 2. Each upper convolution comprises two 3 x 3 convolution layers, a normalization layer, a rectified linear unit and an up-sampling layer; the up-sampling kernel size is 2 x 2 and the convolution kernel size is 3 x 3.
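The encoder and decoder stages above can be sketched in PyTorch as follows. This is an illustrative sketch under assumptions: the class names are hypothetical, and a 2 x 2 transposed convolution is used as one common realization of the 2 x 2 up-sampling layer.

```python
import torch
import torch.nn as nn

class DownConv(nn.Module):
    """One encoder stage: two 3x3 convs + BatchNorm2d + ReLU, then 2x2 max pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2, stride=2)

    def forward(self, x):
        feat = self.block(x)           # feature map sent to the skip connection
        return feat, self.pool(feat)   # pooled map goes to the next stage

class UpConv(nn.Module):
    """One decoder stage: 2x2 up-sampling (transposed conv), concat skip, two 3x3 convs."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.block = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.block(torch.cat([skip, x], dim=1))
```

Four such `DownConv`/`UpConv` stages would be chained to form the U-shaped model described in the text.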
Considering that the U-Net encoder performs only local operations, cannot integrate global information, and loses spatial information during down-sampling, the invention adds an attention module on the skip connections. The attention module contains a deformable convolution module, so the shape of the convolution window can be trained, achieving a scale-transformation effect and learning information at different scales better, thereby adapting to geometric variations such as the shape and size of different objects. The deformable convolution module enhances the spatial position information of the sampled image through an additional offset output and, by combining global and local information on the skip connection, further helps identify the boundary of the liver tumor.
As shown in fig. 3, the attention module obtains geometric information of the liver tumor through a deformable convolution and, in parallel, processes the feature map obtained after convolution to produce an output feature map.
The feature map obtained after convolution is processed as follows: on the skip connection, convolutions transform it into several new feature maps; high-level information about image boundaries is extracted and a feature probability map is computed; the feature probability map is used for weighting; and the weighted result is summed with the feature map produced by the deformable convolution module to obtain the output feature map.
How the output feature map is derived is discussed in detail below:
First, the geometric information of the liver tumor is obtained through a 3 x 3 deformable convolution. The formula can be written as:

C(p_0) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n + Δp_n)

where C is the output feature map, p_0 is any position in the feature map, R is the regular sampling grid of the convolution kernel, p_n ∈ R is any position in the kernel, w(p_n) is the weight of the convolution kernel at position p_n, x is the input feature map, and Δp_n is the learned offset that deforms the sampling grid.
Meanwhile, on the skip connection, 1 x 1 convolutions transform the feature map obtained after convolution into three new feature maps A, G and T, each reshaped to size C x M, where M = H x W is the total number of pixels in the feature map. A and G are then multiplied (dot product) to obtain high-level information about the image boundary, and the result is passed through a softmax function to obtain the feature probability map:

p_{ji} = exp(A_i · G_j) / Σ_{i=1}^{M} exp(A_i · G_j)

where p_{ji} represents the degree of influence of the i-th pixel on the j-th pixel: the more similar the features of the two pixels, the stronger the correlation between them.
The obtained p is then used for weighting, i.e. multiplied with T, which helps highlight the distribution of important features in the image.
Finally, the weighted result is added to the feature map C obtained by the deformable convolution module to produce the final output feature map.
This operation can be expressed by the following formula:

O_j = α Σ_{i=1}^{M} (p_{ji} T_i) + C_j

where α is a learnable weight coefficient and O is the final output feature map, obtained by adding the globally weighted feature map of the attention module to the feature map C from the deformable convolution.
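The attention computation on the skip connection can be sketched as follows. This is an illustrative sketch under assumptions: `SkipAttention` is a hypothetical name, and a plain 3 x 3 convolution stands in for the deformable convolution path (a real implementation could use, e.g., `torchvision.ops.DeformConv2d`).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipAttention(nn.Module):
    """Sketch of the skip-connection attention (DA) module."""
    def __init__(self, ch):
        super().__init__()
        self.geom = nn.Conv2d(ch, ch, 3, padding=1)  # stand-in for the deformable conv -> C
        self.a = nn.Conv2d(ch, ch, 1)                # the three 1x1-conv feature maps
        self.g = nn.Conv2d(ch, ch, 1)
        self.t = nn.Conv2d(ch, ch, 1)
        self.alpha = nn.Parameter(torch.zeros(1))    # learnable weight coefficient

    def forward(self, x):
        n, c, h, w = x.shape
        m = h * w                                    # M = total number of pixels
        C = self.geom(x)
        A = self.a(x).view(n, c, m)
        G = self.g(x).view(n, c, m)
        T = self.t(x).view(n, c, m)
        # feature probability map p: softmax over the M source positions
        p = F.softmax(torch.bmm(A.transpose(1, 2), G), dim=1)   # (N, M, M)
        out = torch.bmm(T, p).view(n, c, h, w)       # weight T by p
        return self.alpha * out + C                  # sum with the deformable-conv path
```

With `alpha` initialized to zero, the module starts as a pass-through of the geometric branch and gradually learns how strongly to mix in the global attention term.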
As shown in fig. 4, considering the size variation of CT images across different slices, the network model further includes a deep supervision (Deep Supervision) module to strengthen the network's generalization to these variations and thereby reduce erroneous segmentation. The deep supervision module is added in the up-sampling process of each layer, so the model strengthens learning in the middle and lower layers of the network and the shallow layers are trained more fully, improving the recognition ability of the network.
The deep supervision module works as follows: after each deconvolution stage finishes, the resulting feature map is up-sampled to the original size according to the corresponding scaling factor and the loss for that stage is computed; each stage's loss is then assigned a corresponding weight and the weighted losses are summed, mitigating the effect of size differences across CT image slices.
Specifically, after each deconvolution stage finishes, the obtained feature map is up-sampled to the original size according to the corresponding scaling factor, giving a prediction map at the original size for that stage; the loss between this map and the ground truth (GT) of the original image is then computed; finally, each stage's loss is assigned a corresponding weight and the weighted losses are summed, which to some extent addresses the variable size and shape of livers and tumors in CT images.
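The deep supervision scheme can be sketched as follows. The function name and the per-stage weights are illustrative assumptions (the patent does not state the weights here), and plain cross-entropy stands in for the per-stage loss.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(stage_logits, target, weights=(0.25, 0.5, 0.75, 1.0)):
    """Upsample each decoder stage's logits to the target size, compute a
    per-stage loss against the ground truth, and return the weighted sum."""
    total = 0.0
    for logits, w in zip(stage_logits, weights):
        up = F.interpolate(logits, size=target.shape[-2:], mode="bilinear",
                           align_corners=False)      # up-sample by the stage's scale factor
        total = total + w * F.cross_entropy(up, target)
    return total
```

Each element of `stage_logits` would be the raw output of one deconvolution stage, smallest first.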
In step S3, the preprocessed data is input into the network model for training as follows: first, the data is fed into the built network model, the model is trained by forward propagation, a predicted probability map is output through a softmax classifier, and a loss function is selected; the error computed by the loss function is then back-propagated and the parameter values of the network model are updated; this process is repeated until the loss function value converges within the set range; finally, the obtained network model is validated to select the optimal network model.
Specifically, the training proceeds as follows. Step S31: feed the data into the built network model and train it by forward propagation. Step S32: output a predicted probability map through a softmax classifier and select a loss function. Step S33: back-propagate the error computed by the loss function and update the parameter values of the network model. Step S34: return to step S31 and repeat the above process until the loss function value converges within the set range, then test the obtained network model. Step S35: validate the obtained network model to select the optimal network model.
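Steps S31 to S34 can be sketched as a standard training loop. This is a minimal illustration: the function name and data-loader shape are assumptions, plain cross-entropy stands in for the full loss (the patent combines cross-entropy, Dice and deep supervision), and the learning rate matches the 1e-4 stated in the embodiment.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=1, lr=1e-4, device="cpu"):
    """Forward propagation, loss computation, back propagation, parameter update."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            logits = model(images.to(device))   # forward propagation
            loss = ce(logits, masks.to(device))
            loss.backward()                     # back propagation of the error
            opt.step()                          # update parameter values
    return model
```

In practice the loop would run until the loss converges within the set range, with validation after each epoch to select the optimal model.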
Cross-entropy and Dice are adopted as the loss function, combined as:

l_{ce+dice} = L_ms + L_i

where L_i is the cross-entropy loss, in which p is the prediction, t is the target, i indexes the data points and j the classes; and L_ms is the loss modified by the deep supervision module, in which Y and Ŷ denote the flattened prediction probability map and the flattened ground truth, p^b denotes the prediction map generated by layer b, g denotes the ground truth, and β is the weighting coefficient. l_{ce+dice} is the loss function of the invention.
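A combined cross-entropy plus soft-Dice loss for binary segmentation can be sketched as follows. This is illustrative: the exact Dice formulation and term weighting in the patent are not fully specified, and `ce_dice_loss` is a hypothetical name.

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, eps=1.0):
    """Cross-entropy plus (1 - soft Dice) over the foreground class."""
    ce = F.cross_entropy(logits, target)
    prob = F.softmax(logits, dim=1)[:, 1]    # foreground probability map
    t = target.float()
    inter = (prob * t).sum()
    dice = (2 * inter + eps) / (prob.sum() + t.sum() + eps)  # eps smooths empty masks
    return ce + (1 - dice)
```

The Dice term directly rewards overlap with the ground truth, which helps with the class imbalance between small tumors and large backgrounds.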
In this embodiment, the learning rate is set to 1e-4 and the training batch size is 8. The hardware environment of the experiment is an NVIDIA GTX 1080 Ti with an Intel Core i7 processor, and the software environment is PyTorch.
In addition, when the obtained network model is validated, the effect of the trained model is evaluated. The evaluation in the invention adopts the Dice metric, which is used to assess the accuracy of the proposed segmentation algorithm.
The Dice metric is defined as follows:

Dice(A, B) = 2|A ∩ B| / (|A| + |B|)

where A is the segmentation map, B is the ground-truth segmentation, |A| and |B| are the numbers of voxels in A and B respectively, and |A ∩ B| is the number of voxels in the overlap of the two maps.
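The Dice metric can be computed directly from two binary masks; the helper below is illustrative (the name and the both-empty convention are assumptions, not taken from the patent).

```python
import numpy as np

def dice_metric(a, b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```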
In the step S4, the test set is segmented by using the obtained optimal network model, so as to obtain a final segmentation result.
Example 2
Based on the same inventive concept, this embodiment provides a deep-learning-based liver tumor segmentation system. Its problem-solving principle is similar to that of the liver tumor segmentation method above, so repeated description is omitted.
The embodiment provides a liver tumor segmentation system based on deep learning, which comprises:
A preprocessing module for preprocessing the collected data set;
a building module for building a network model from the preprocessed data, wherein the network model comprises a plurality of lower convolution layers and a plurality of upper convolution layers, the lower convolution layers are linked to the upper convolution layers via skip connections, and the feature maps produced by the lower convolution layers are passed to the upper convolution layers through attention modules placed on the skip connections;
the training module is used for inputting the preprocessed data into the network model for training to obtain an optimal network model;
And the segmentation module is used for segmenting the CT image to be processed by utilizing the optimal network model to obtain a liver tumor region.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above examples are given by way of illustration only and do not limit the embodiments. Other variations and modifications will be apparent to those of ordinary skill in the art in light of the foregoing description. It is neither necessary nor possible to enumerate all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of protection of the invention.

Claims (8)

1. The liver tumor segmentation method based on deep learning is characterized by comprising the following steps of:
step S1: preprocessing the collected data set;
Step S2: building a network model according to the preprocessed data, wherein the network model comprises a plurality of lower convolution layers and a plurality of upper convolution layers; the lower convolution layers are connected to the upper convolution layers through skip connections, and the feature maps obtained by the lower convolution layers are passed to the upper convolution layers through attention modules on the skip connections; the attention module acquires geometric information of the liver tumor through a deformable convolution and, at the same time, processes the feature map obtained after convolution to produce an output feature map; processing the feature map obtained after convolution comprises: transforming it by convolution on the skip connection into several new feature maps, obtaining high-level information of the image boundaries, computing a feature probability map, weighting the feature probability map, and summing the weighted result with the feature map obtained by the deformable convolution module to obtain the output feature map; specifically, the geometric information of the liver tumor is obtained through a 3×3 deformable convolution:

C(p0) = Σ_{pn ∈ R} w(pn) · x(p0 + pn + Δpn)

wherein C is the output feature map, x is the input feature map, p0 represents any position in the feature map, pn ∈ R represents any sampling position in the convolution kernel grid R, Δpn is the learned offset for pn, and w(pn) represents the weight of the convolution kernel at position pn;
Meanwhile, on the skip connection the feature map obtained after convolution is transformed by 1×1 convolutions into three new feature maps A, G and T, each of size H×W, where M = H×W is the total number of pixels in the feature map; A and G are then multiplied to obtain high-level information of the image boundary, and the result is passed through a softmax function to obtain the feature probability map:

p_{ji} = exp(A_i · G_j) / Σ_{i=1}^{M} exp(A_i · G_j)

wherein p_{ji} represents the degree to which the i-th pixel influences the j-th pixel; the more similar the features of the two pixels, the stronger the association between them;
The obtained p is then used to weight the features, i.e. p is multiplied by T, which helps highlight the distribution of important features in the image;

Finally, the weighted result is added to the feature map C obtained by the deformable convolution module to obtain the final output feature map;
This operation can be expressed by the following formula:

R_j = α · Σ_{i=1}^{M} (p_{ji} · T_i) + C_j

wherein α represents a learnable weight value and R is the final output feature map, obtained by adding the weighted attention features to the global feature map C produced by the deformable convolution;
Step S3: inputting the preprocessed data into the network model for training to obtain an optimal network model;
Step S4: segmenting the CT image to be processed using the optimal network model to obtain the liver tumor region.
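The attention module of step S2 can be sketched in PyTorch as follows. This is a minimal illustration, not the patent's implementation: a plain 3×3 convolution stands in for the 3×3 deformable convolution (in practice `torchvision.ops.DeformConv2d` would supply the learned offsets), and all class, layer, and channel names are assumptions.

```python
import torch
import torch.nn as nn

class SkipAttention(nn.Module):
    """Sketch of the skip-connection attention module: a 3x3 convolution
    (stand-in for the deformable convolution) yields geometric features C,
    three 1x1 convolutions yield A, G, T, and the softmax of A^T·G gives
    the M x M probability map p used to weight T."""

    def __init__(self, ch):
        super().__init__()
        self.geom = nn.Conv2d(ch, ch, 3, padding=1)  # stand-in for deformable conv -> C
        self.conv_a = nn.Conv2d(ch, ch, 1)           # 1x1 conv -> A
        self.conv_g = nn.Conv2d(ch, ch, 1)           # 1x1 conv -> G
        self.conv_t = nn.Conv2d(ch, ch, 1)           # 1x1 conv -> T
        self.alpha = nn.Parameter(torch.zeros(1))    # learnable weight α

    def forward(self, x):
        n, ch, h, w = x.shape
        m = h * w                                    # M = H*W pixels
        c = self.geom(x)                             # geometric features C
        a = self.conv_a(x).view(n, ch, m)
        g = self.conv_g(x).view(n, ch, m)
        t = self.conv_t(x).view(n, ch, m)
        # p[j, i]: influence of pixel i on pixel j (softmax over i)
        p = torch.softmax(torch.bmm(a.transpose(1, 2), g), dim=-1)
        out = torch.bmm(t, p.transpose(1, 2)).view(n, ch, h, w)  # weight T by p
        return self.alpha * out + c                  # R = α·Σ(p·T) + C
```

The module preserves the input shape, so it can sit directly on a skip connection between a lower and an upper convolution layer.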
2. The deep learning based liver tumor segmentation method according to claim 1, wherein the method of preprocessing the collected data set comprises: finding the start and end positions of the liver region, limiting the HU values of the CT images in the data set to a specified range, standardizing the CT images, and slicing the CT images.
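The preprocessing of claim 2 can be sketched as below. The HU window [-200, 250] is an assumed example (the claim only requires "a specified range"), and the step of locating the liver's start and end slices is omitted here because it depends on a liver mask not described in the claim.

```python
import numpy as np

def preprocess_volume(vol, hu_min=-200.0, hu_max=250.0):
    """Clip HU values to a liver window (assumed example range),
    z-score standardize the volume, and return its axial 2-D slices."""
    vol = np.clip(vol.astype(np.float32), hu_min, hu_max)  # limit HU values
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)          # standardize
    return [vol[i] for i in range(vol.shape[0])]           # slice processing
```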
3. The deep learning based liver tumor segmentation method according to claim 1, wherein: the lower convolution layer comprises a plurality of convolution layers, a normalization layer, a rectified linear unit, and a max-pooling layer.
4. The deep learning based liver tumor segmentation method according to claim 1, wherein: the upper convolution layer comprises a plurality of convolution layers, a normalization layer, a rectified linear unit, and an upsampling layer.
5. The deep learning based liver tumor segmentation method according to claim 1, wherein: the network model also includes a deep supervision module.
6. The deep learning based liver tumor segmentation method of claim 5, wherein the deep supervision module operates as follows: after each deconvolution stage, the resulting feature map is upsampled to the original size by the corresponding scaling factor and the loss for that stage is computed; the per-stage losses are then assigned corresponding weights and summed.
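The deep supervision of claim 6 can be sketched as follows; the per-stage weights and the use of cross-entropy are assumptions, since the patent fixes neither.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(stage_outputs, target, weights=(0.2, 0.3, 0.5)):
    """Upsample each decoder-stage output to the target's spatial size,
    compute a per-stage loss, and sum the losses with per-stage weights
    (weight values and loss choice are illustrative assumptions)."""
    total = torch.zeros(())
    for logits, w in zip(stage_outputs, weights):
        up = F.interpolate(logits, size=target.shape[-2:],
                           mode='bilinear', align_corners=False)
        total = total + w * F.cross_entropy(up, target)
    return total
```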
7. The deep learning based liver tumor segmentation method according to claim 1, wherein inputting the preprocessed data into the network model for training comprises: first inputting the data into the built network model, training it through forward propagation, outputting a predicted probability map through a softmax classifier, and selecting a loss function; back-propagating according to the error computed by the loss function and updating the parameter values in the network model; repeating this process until the loss function value converges into the set range; and validating the resulting network model to obtain the optimal network model.
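The training procedure of claim 7 can be sketched as a standard loop; the optimizer, convergence test, and hyper-parameters are illustrative assumptions not specified by the claim.

```python
import torch
from torch import optim

def train_until_converged(model, loader, loss_fn, lr=1e-3,
                          tol=1e-3, max_epochs=50):
    """Forward propagation -> loss -> back-propagation -> parameter update,
    repeated until the epoch loss changes by less than a set tolerance
    (convergence criterion and Adam optimizer are assumptions)."""
    opt = optim.Adam(model.parameters(), lr=lr)
    prev = float('inf')
    for _ in range(max_epochs):
        running = 0.0
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)  # forward propagation + loss
            loss.backward()              # back-propagate the error
            opt.step()                   # update network parameters
            running += loss.item()
        running /= len(loader)
        if abs(prev - running) < tol:    # loss within the set range
            break
        prev = running
    return model
```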
8. A liver tumor segmentation system based on deep learning, comprising:
A preprocessing module for preprocessing the collected data set;
The building module is used for building a network model according to the preprocessed data, wherein the network model comprises a plurality of lower convolution layers and a plurality of upper convolution layers; the lower convolution layers are connected to the upper convolution layers through skip connections, and the feature maps obtained by the lower convolution layers are passed to the upper convolution layers through attention modules on the skip connections; the attention module acquires geometric information of the liver tumor through a deformable convolution and, at the same time, processes the feature map obtained after convolution to produce an output feature map; processing the feature map obtained after convolution comprises: transforming it by convolution on the skip connection into several new feature maps, obtaining high-level information of the image boundaries, computing a feature probability map, weighting the feature probability map, and summing the weighted result with the feature map obtained by the deformable convolution module to obtain the output feature map; specifically, the geometric information of the liver tumor is obtained through a 3×3 deformable convolution:

C(p0) = Σ_{pn ∈ R} w(pn) · x(p0 + pn + Δpn)

wherein C is the output feature map, x is the input feature map, p0 represents any position in the feature map, pn ∈ R represents any sampling position in the convolution kernel grid R, Δpn is the learned offset for pn, and w(pn) represents the weight of the convolution kernel at position pn;

Meanwhile, on the skip connection the feature map obtained after convolution is transformed by 1×1 convolutions into three new feature maps A, G and T, each of size H×W, where M = H×W is the total number of pixels in the feature map; A and G are then multiplied to obtain high-level information of the image boundary, and the result is passed through a softmax function to obtain the feature probability map:

p_{ji} = exp(A_i · G_j) / Σ_{i=1}^{M} exp(A_i · G_j)

wherein p_{ji} represents the degree to which the i-th pixel influences the j-th pixel; the more similar the features of the two pixels, the stronger the association between them;

The obtained p is then used to weight the features, i.e. p is multiplied by T, which helps highlight the distribution of important features in the image;

Finally, the weighted result is added to the feature map C obtained by the deformable convolution module to obtain the final output feature map;

This operation can be expressed by the following formula:

R_j = α · Σ_{i=1}^{M} (p_{ji} · T_i) + C_j

wherein α represents a learnable weight value and R is the final output feature map, obtained by adding the weighted attention features to the global feature map C produced by the deformable convolution;
The training module is used for inputting the preprocessed data into the network model for training to obtain an optimal network model; and
The segmentation module is used for segmenting the CT image to be processed using the optimal network model to obtain the liver tumor region.
CN202110049310.2A 2021-01-14 2021-01-14 Liver tumor segmentation method and system based on deep learning Active CN112750137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110049310.2A CN112750137B (en) 2021-01-14 2021-01-14 Liver tumor segmentation method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110049310.2A CN112750137B (en) 2021-01-14 2021-01-14 Liver tumor segmentation method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN112750137A CN112750137A (en) 2021-05-04
CN112750137B true CN112750137B (en) 2024-07-05

Family

ID=75651786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110049310.2A Active CN112750137B (en) 2021-01-14 2021-01-14 Liver tumor segmentation method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112750137B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223704B (en) * 2021-05-20 2022-07-26 吉林大学 Auxiliary diagnosis method for computed tomography aortic aneurysm based on deep learning
CN113344935B (en) * 2021-06-30 2023-02-03 山东建筑大学 Image segmentation method and system based on multi-scale difficulty perception
CN114240962B (en) * 2021-11-23 2024-04-16 湖南科技大学 CT image liver tumor region automatic segmentation method based on deep learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 The automatic segmentation of Biomedical Image based on U-net network structure

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110889853B (en) * 2018-09-07 2022-05-03 天津大学 Tumor segmentation method based on residual error-attention deep neural network
CN109903292A (en) * 2019-01-24 2019-06-18 西安交通大学 A kind of three-dimensional image segmentation method and system based on full convolutional neural networks
CN112085677B (en) * 2020-09-01 2024-06-28 深圳先进技术研究院 Image processing method, system and computer storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 The automatic segmentation of Biomedical Image based on U-net network structure

Also Published As

Publication number Publication date
CN112750137A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
CN113077471B (en) Medical image segmentation method based on U-shaped network
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112750137B (en) Liver tumor segmentation method and system based on deep learning
CN112927255B (en) Three-dimensional liver image semantic segmentation method based on context attention strategy
CN111291825B (en) Focus classification model training method, apparatus, computer device and storage medium
CN110689543A (en) Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN114926477B (en) Brain tumor multi-mode MRI image segmentation method based on deep learning
Li et al. Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images
CN111429460A (en) Image segmentation method, image segmentation model training method, device and storage medium
CN112308846B (en) Blood vessel segmentation method and device and electronic equipment
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
Popescu et al. Retinal blood vessel segmentation using pix2pix gan
CN112381846B (en) Ultrasonic thyroid nodule segmentation method based on asymmetric network
CN113706486B (en) Pancreatic tumor image segmentation method based on dense connection network migration learning
CN113034507A (en) CCTA image-based coronary artery three-dimensional segmentation method
CN117409030B (en) OCTA image blood vessel segmentation method and system based on dynamic tubular convolution
CN114693671B (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN111127487B (en) Real-time multi-tissue medical image segmentation method
CN116091412A (en) Method for segmenting tumor from PET/CT image
CN112348818A (en) Image segmentation method, device, equipment and storage medium
CN112508902A (en) White matter high signal grading method, electronic device and storage medium
CN116309615A (en) Multi-mode MRI brain tumor image segmentation method
CN113920137B (en) Lymph node metastasis prediction method, device, equipment and storage medium
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
Wang et al. RFPNet: Reorganizing feature pyramid networks for medical image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant