CN114693671A - Semi-automatic lung nodule segmentation method, apparatus, device and medium based on deep learning


Info

Publication number
CN114693671A
Authority
CN
China
Prior art keywords: dimensional, layer, lung, segmentation result, medical image
Legal status: Granted
Application number: CN202210443111.4A
Other languages: Chinese (zh)
Other versions: CN114693671B
Inventors: 韩晓光, 石鲁越, 刘周, 王昌淼, 李丽, 罗虹虹, 罗德红, 于振涛
Current Assignee: The Chinese University of Hong Kong, Shenzhen
Original Assignee: The Chinese University of Hong Kong, Shenzhen
Application filed by The Chinese University of Hong Kong, Shenzhen
Priority to CN202210443111.4A
Publication of CN114693671A
Application granted; publication of CN114693671B
Legal status: Active

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 7/11 Region-based segmentation
    • G06T 7/12 Edge-based segmentation
    • G06T 2207/10012 Stereo images
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30061 Lung

Abstract

The invention discloses a deep-learning-based semi-automatic lung nodule segmentation method, apparatus, device, and medium. The method performs two-dimensional and three-dimensional segmentation of a medical image by combining two-dimensional and three-dimensional semi-automatic lung nodule segmentation networks, and determines the actual three-dimensional segmentation result from the per-layer two-dimensional segmentation results and the three-dimensional segmentation result. The two-dimensional segmentation network better learns global information of the whole image, while the three-dimensional segmentation network better learns the three-dimensional characteristics of lung nodules; their combination effectively overcomes the limitations of using either network alone, namely that a two-dimensional network cannot exploit three-dimensional features and a three-dimensional network cannot exploit global information. By incorporating prior information from the user, the method also resolves the difficulty of choosing a suitable segmentation method when the target size is not known a priori, and a multi-branch joint training scheme effectively improves segmentation performance and alleviates the false-positive problem of fully automatic methods.

Description

Semi-automatic lung nodule segmentation method, apparatus, device and medium based on deep learning
Technical Field
The invention relates to the fields of computer vision and medical image analysis, and in particular to a deep-learning-based method, apparatus, device, and medium for semi-automatic segmentation of lung nodules.
Background
Lung cancer ranks first among malignant tumors worldwide in both morbidity and mortality, accounting for 11.6% of all cancers, with more than 1.8 million people dying from lung cancer each year. Survival rates differ markedly across clinical stages, so early and accurate diagnosis is key to reducing lung cancer mortality, and CT screening of high-risk populations is now globally recognized. However, with the widespread adoption of screening a new problem has arisen: lung nodules (>2 mm) are over-detected, and a large number of nodules of indeterminate nature are found. How to formulate a reasonable lung nodule diagnosis and treatment strategy has therefore become a major public health problem in China. Segmentation of lung nodules is an important task in lung cancer analysis, as accurate segmentation results are needed for quantitative assessment and computer-aided diagnosis of lung cancer. Manual segmentation of lung nodules, however, requires substantial manpower, and most deep-learning-based lung nodule segmentation algorithms are fully automatic. Although they reduce the time and errors of manual labeling, differences in CT imaging parameters and modalities, together with varying nodule positions and blurred morphology, mean that fully automatic algorithms often suffer from false positives, i.e., large amounts of normal tissue are erroneously classified as lung nodules, which greatly complicates medical diagnosis.
Disclosure of Invention
Based on this, it is necessary to provide a deep-learning-based method, apparatus, computer device, and storage medium for semi-automatic segmentation of lung nodules, so as to solve the prior-art problem that false positives occur easily, causing large amounts of normal tissue to be mistakenly identified as lung nodules.
In a first aspect, a method for semi-automatic segmentation of lung nodules based on deep learning is provided, which includes:
acquiring three-dimensional lung medical image data to be segmented, and performing preprocessing and data enhancement on the three-dimensional lung medical image data to be segmented;
acquiring position prior information in an interaction layer of the three-dimensional lung medical image data to be segmented, generating an interaction information map according to the position prior information,
inputting the two-dimensional lung medical image of the interaction layer and the interaction information map into a preset interaction model to generate a two-dimensional lung nodule segmentation result;
according to the two-dimensional lung nodule segmentation result, a three-dimensional medical image of a region where a lung nodule is located is cut out from the three-dimensional lung medical image to be segmented to serve as an interest region, and the interest region is input into a preset segmentation model to generate a first three-dimensional lung nodule segmentation result;
segmenting non-segmented layers of the three-dimensional lung medical image data to be segmented layer by layer through a preset propagation model to obtain a layer-by-layer segmentation result of each layer, and generating a second three-dimensional lung nodule segmentation result according to the layer-by-layer segmentation result of each layer and the two-dimensional lung nodule segmentation result;
and determining an actual three-dimensional lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result.
In an embodiment, before the inputting the two-dimensional lung medical image of the interaction layer and the interaction information map into a preset interaction model to generate a two-dimensional lung nodule segmentation result, the method includes:
reconstructing an input layer of the interaction model;
the reconstruction process specifically includes:
modifying the number of image channels of the input formed by the two-dimensional medical image of the interaction layer and the interaction information map to a preset channel number;
summing the weights of the pre-trained input layers according to the channel direction to serve as the weight of a first image channel of the input layer of the interactive model;
the weights of the second image channel of the input layer of the interaction model are randomly initialized using a gaussian distribution.
In an embodiment, the segmenting an unsegmented layer of the lung medical image data layer by layer through a preset propagation model to obtain a layer-by-layer segmentation result of each layer includes:
taking the layer of the two-dimensional lung medical image currently being segmented as a target layer, and taking the layers of the two-dimensional lung medical image already segmented by the interactive model or the propagation model as segmented layers;
Acquiring a segmented layer deep learning feature, a lung nodule three-dimensional feature and a target layer deep learning feature, and inputting the features into the propagation model;
and performing two-dimensional lung nodule segmentation layer by taking the adjacent frames of the interaction layer as starting points through the propagation model so as to obtain a layer-by-layer segmentation result of each layer.
In one embodiment, the target layer deep learning features are obtained by:
acquiring a first key value of the two-dimensional pulmonary nodule medical image of the target layer and a second key value of the two-dimensional pulmonary nodule medical image of the segmented layer through an encoder;
processing the two-dimensional lung nodule medical image of the segmented layer through the encoder to obtain a feature map of the segmented layer;
generating a similarity matrix according to the first key value and the second key value;
and acquiring the deep learning characteristic of the target layer according to the product of the similarity matrix and the segmented layer characteristic graph.
In one embodiment, the three-dimensional feature of the lung nodule is obtained by:
respectively selecting two-dimensional feature maps corresponding to the target layer from each three-dimensional deep learning feature map output by the preset segmentation model;
and according to the size of the region of interest, reducing the two-dimensional feature map to a feature map with the size consistent with that of the two-dimensional lung image of the target layer, and taking the feature map as the three-dimensional feature of the lung nodule.
In an embodiment, said generating an interaction information map according to the location prior information includes:
when the position prior information is entered by clicking, generating a first interaction information map, and generating a Gaussian-distribution heatmap of a preset size in the first interaction information map with the click position as the center point, wherein the pixel value at the center point is 1, the pixel value at the boundary is 0, and the pixel values of the region outside the Gaussian-distribution heatmap are 0;
and when the position prior information is entered by scribbling, generating a second interaction information map, wherein the pixels of the scribbled region in the second interaction information map are 1 and the pixels of the remaining regions are 0.
In an embodiment, the segmenting the three-dimensional medical image of the region where the lung nodule is located in the pulmonary medical image data according to the two-dimensional lung nodule segmentation result includes:
taking the central point of the two-dimensional lung nodule segmentation result as the central point of the interest region;
and cropping out the three-dimensional medical image of the region where the lung nodule is located, using the maximum possible diameter of a lung nodule as the side length.
In an embodiment, the determining an actual lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result includes:
calculating a mean of the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result to determine a segmentation probability;
and taking the region with the segmentation probability larger than a preset threshold value as a lung nodule region, and outputting the actual three-dimensional lung nodule segmentation result.
In a second aspect, a lung nodule semi-automatic segmentation apparatus based on deep learning is provided, including:
the system comprises a three-dimensional lung medical image data acquisition unit, a data processing unit and a data processing unit, wherein the three-dimensional lung medical image data acquisition unit is used for acquiring three-dimensional lung medical image data to be segmented, and preprocessing and data enhancing the three-dimensional lung medical image data to be segmented;
a position prior information obtaining unit, configured to obtain position prior information in an interaction layer of the three-dimensional lung medical image data to be segmented, and generate an interaction information map according to the position prior information,
the two-dimensional lung nodule segmentation result generation unit is used for inputting the two-dimensional lung medical image of the interaction layer and the interaction information map into a preset interaction model so as to generate a two-dimensional lung nodule segmentation result;
the first three-dimensional pulmonary nodule segmentation result generation unit is used for cutting out a three-dimensional medical image of a region where a pulmonary nodule is located from the three-dimensional pulmonary medical image to be segmented according to the two-dimensional pulmonary nodule segmentation result to serve as an interest region, and inputting the interest region into a preset segmentation model to generate a first three-dimensional pulmonary nodule segmentation result;
the second three-dimensional pulmonary nodule segmentation result generation unit is used for segmenting an unsegmented layer of the three-dimensional pulmonary medical image data to be segmented layer by layer through a preset propagation model so as to obtain a layer-by-layer segmentation result of each layer, and generating a second three-dimensional pulmonary nodule segmentation result according to the layer-by-layer segmentation result of each layer and the two-dimensional pulmonary nodule segmentation result;
and the actual three-dimensional lung nodule segmentation result determining unit is used for determining an actual three-dimensional lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result.
In a third aspect, a computer device is provided, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, the processor implementing the method for semi-automatic segmentation of lung nodules based on deep learning as described above when executing the computer readable instructions.
In a fourth aspect, a computer-readable storage medium storing computer-readable instructions is provided, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the deep-learning-based lung nodule semi-automatic segmentation method described above.
The deep-learning-based lung nodule semi-automatic segmentation method, apparatus, computer device, and storage medium comprise: acquiring three-dimensional lung medical image data to be segmented, and performing preprocessing and data enhancement on it; obtaining position prior information in an interaction layer of the data, generating an interaction information map from the position prior information, and inputting the two-dimensional lung medical image of the interaction layer together with the interaction information map into a preset interaction model to generate a two-dimensional lung nodule segmentation result; cropping, according to the two-dimensional lung nodule segmentation result, the three-dimensional medical image of the region where the lung nodule is located out of the data as a region of interest, and inputting the region of interest into a preset segmentation model to generate a first three-dimensional lung nodule segmentation result; segmenting the unsegmented layers of the data layer by layer through a preset propagation model to obtain a layer-by-layer segmentation result for each layer, and generating a second three-dimensional lung nodule segmentation result from the layer-by-layer results and the two-dimensional result; and determining the actual three-dimensional lung nodule segmentation result from the first and second three-dimensional results. By combining a two-dimensional and a three-dimensional segmentation network, the method learns global information of the whole image through the two-dimensional segmentation and the three-dimensional characteristics of lung nodules through the three-dimensional segmentation, effectively overcoming the inability of a two-dimensional network alone to exploit three-dimensional features and of a three-dimensional network alone to exploit global information. Incorporating the user's prior information resolves the difficulty of choosing a suitable segmentation method when the target size is not known a priori, and the multi-branch joint training scheme effectively improves segmentation performance and alleviates the false-positive problem of fully automatic methods.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an application environment of a method for semi-automatic segmentation of lung nodules based on deep learning according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a lung nodule semi-automatic segmentation method based on deep learning according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating a method for processing propagation model data according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a segmentation effect of a model prediction result and an artificial labeling result according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a lung nodule semi-automatic segmentation apparatus based on deep learning according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a computing device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The deep-learning-based lung nodule semi-automatic segmentation method provided by this embodiment can be applied in the application environment shown in fig. 1. The acquired three-dimensional lung medical image data to be segmented are preprocessed and data-enhanced, and user prior information is entered on any layer of the volume. The preprocessed data are input into the interaction model to obtain a two-dimensional lung nodule segmentation result; a region of interest is cropped out according to this two-dimensional result and input into the three-dimensional segmentation model to obtain the first three-dimensional lung nodule segmentation result. Meanwhile, the propagation model segments the unsegmented layers of the volume layer by layer, and these layer-by-layer results are combined with the two-dimensional lung nodule segmentation result to form the second three-dimensional lung nodule segmentation result. The actual three-dimensional lung nodule segmentation result is then computed from the first and second three-dimensional results.
In one embodiment, as shown in fig. 2, a method for semi-automatic segmentation of lung nodules based on deep learning is provided, which includes the following steps:
in step S110, three-dimensional lung medical image data to be segmented is acquired, and preprocessing and data enhancement are performed on the three-dimensional lung medical image data to be segmented;
in the embodiment of the present application, the three-dimensional pulmonary medical image data to be segmented may include magnetic resonance image data, computed tomography image data, and positron emission tomography image data, and the image data may be obtained by manually labeling a lung nodule region in an image by a professional imaging physician in advance.
In an embodiment of the present application, the read clinical medical images may be divided into a training set, a validation set, and a test set. For training, testing and validation of a deep learning based semi-automatic segmentation model of lung nodules.
In the present application, the three-dimensional medical image data to be segmented may specifically include a plurality of two-dimensional CT (computed tomography) images, for example of size 200 × 512 × 512, i.e., a total of 200 two-dimensional CT images of 512 × 512 pixels.
In the embodiment of the application, the preprocessing and data enhancement of the three-dimensional lung medical image data to be segmented specifically include resampling the acquired data to a fixed resolution of 1 × 1, then clipping the HU values to the range [-1200, 600] and normalizing them to [0, 1]. The lung region may then be segmented using a watershed segmentation algorithm and expanded using a dilation algorithm, and only the pixel values inside the expanded segmentation region are retained.
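As a minimal sketch of this preprocessing pipeline (illustrative only; the function name, the NumPy/SciPy tooling, and the in-plane-only resampling are assumptions, not the patent's code):

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(volume, spacing, target_spacing=(1.0, 1.0)):
    """Resample to a fixed resolution, clip HU values, normalize to [0, 1].

    volume:  3D array of HU values, shape (layers, height, width)
    spacing: (row_mm, col_mm) in-plane pixel spacing from the CT header
    """
    # Resample each slice to the fixed 1 x 1 target resolution
    zoom = (1.0, spacing[0] / target_spacing[0], spacing[1] / target_spacing[1])
    volume = ndimage.zoom(volume, zoom, order=1)

    # Clip HU values to [-1200, 600] and normalize to [0, 1]
    volume = np.clip(volume, -1200, 600)
    return (volume + 1200.0) / 1800.0
```

The watershed lung-mask and dilation step described above would then zero out all voxels outside the expanded lung region.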
In the embodiment of the application, in the process of training the model, the three-dimensional medical lung image data can be divided into a training set, a verification set and a test set according to the ratio of 3:1:1, and the three data sets respectively contain three-dimensional lung nodule regions manually labeled by experienced imaging physicians, wherein the labeling result of the test set is only used for performance evaluation of the final model and is not used in the training stage of the model.
In step S120, obtaining position prior information in an interaction layer of the three-dimensional lung medical image data to be segmented, and generating an interaction information map according to the position prior information;
in this embodiment of the present application, the position prior information may be provided by a user in a click or line drawing manner in a lung nodule region of a two-dimensional lung nodule image including a lung nodule in any layer of the three-dimensional image data to be segmented.
Wherein, the layer where the two-dimensional pulmonary nodule image recorded with the position prior information is located is an interaction layer.
In this embodiment, assuming the two-dimensional lung nodule image has height H and width W, when the position prior information is entered by clicking, an image of size H × W may be generated in which a Gaussian-distribution heatmap with a radius of 5 pixels is centered on the user's click position; the pixel value at the heatmap's center is 1, falls to 0 at its boundary, and the pixel values outside the heatmap are all 0. This image may be used as the interaction information map.
When the position prior information is entered by scribbling, an image of size H × W is generated in which the pixel values of the user's scribbled region are 1 and the pixel values elsewhere are 0, and this image is taken as the interaction information map.
In step S130, the two-dimensional lung medical image of the interaction layer and the interaction information map are input into a preset interaction model to generate a two-dimensional lung nodule segmentation result;
in the embodiment of the application, the two-dimensional lung medical image of the interaction layer, i.e., the two-dimensional lung medical image on which the position prior information was entered, and the interaction information map are input together into the interaction model, which extracts two-dimensional lung nodule deep learning features from them and finally outputs the lung nodule segmentation result on the two-dimensional medical image.
In the embodiment of the application, the interaction model adopts the DeepLabv3+ network structure, whose overall framework consists of an encoder and a decoder. The encoder is a deep convolutional neural network with atrous (dilated) convolution, using the common ResNet model as its backbone, and extracts multi-scale information from the output features through an atrous spatial pyramid pooling module; the decoder further fuses low-level and high-level features and outputs the segmentation result, improving segmentation accuracy.
In step S140, according to the two-dimensional lung nodule segmentation result, a three-dimensional medical image of a region where a lung nodule is located is cut out from the three-dimensional lung medical image to be segmented, and the three-dimensional medical image is used as an interest region, and the interest region is input into a preset segmentation model to generate a first three-dimensional lung nodule segmentation result;
in the embodiment of the application, the preset segmentation model may be a three-dimensional convolutional neural network model, and the specific structure of the model may be a model 3D-UNet.
In the embodiment of the application, the position and the size of the interest region can be determined according to the two-dimensional lung nodule segmentation result, the center point of the two-dimensional lung nodule segmentation result is used as the center point of the interest region, the possible maximum diameter of a lung nodule is used as the side length to cut out the three-dimensional interest region in the three-dimensional medical image data to be segmented, then the three-dimensional lung nodule deep learning feature is extracted through the three-dimensional convolution neural network model, and the first three-dimensional lung nodule segmentation result is predicted.
In step S150, segmenting, layer by layer, an unsegmented layer of the three-dimensional pulmonary medical image data to be segmented through a preset propagation model to obtain a layer-by-layer segmentation result of each layer, and generating a second three-dimensional pulmonary nodule segmentation result according to the layer-by-layer segmentation result of each layer and the two-dimensional pulmonary nodule segmentation result;
in the embodiment of the application, a layer being segmented is recorded as a target layer, a layer segmented by an interactive model or a propagation model is recorded as a segmented layer, and based on the segmented layer pulmonary nodule depth learning feature, the pulmonary nodule three-dimensional feature and the target layer pulmonary nodule depth learning feature, the two-dimensional propagation model is used for performing two-dimensional pulmonary nodule segmentation on an unsegmented layer by layer so as to obtain a layer-by-layer segmentation result of each layer. That is, the segmentation result of each target layer can be obtained by the target layer depth learning feature, the segmented layer depth learning feature and the lung nodule three-dimensional feature through the decoder prediction of the propagation model.
In an embodiment of the present application, the deep learning feature of the target layer may be predicted from the two-dimensional lung nodule image of the target layer through a network of encoders. Specifically, key values and feature values of the target layer and the segmented layer can be output through the encoder network, and the target layer and the segmented layer are matched by using the key values, so that the feature value for segmenting the target layer is extracted from the feature values of the segmented layer and is used as the deep learning feature of the target layer.
In an embodiment of the application, the lung nodule three-dimensional feature is obtained by selecting, from each three-dimensional deep learning feature map output by the preset segmentation model, the feature layer at the target-layer position, and then reversing the region-of-interest cropping step.
In the embodiment of the present application, after the target layer depth learning feature, the three-dimensional lung nodule feature, and the segmented layer depth learning feature are obtained, the target layer depth learning feature and the segmented layer depth learning feature are input to a decoder, and meanwhile, the three-dimensional lung nodule features of different feature levels are added to the same position in the decoder, so as to output the layer-by-layer segmentation result, and then the layer-by-layer segmentation result and the two-dimensional lung nodule segmentation result are combined to form a second three-dimensional lung nodule segmentation result.
In the embodiment of the present application, when the propagation model performs layer-by-layer segmentation, the two frames adjacent to the interaction layer may be used as starting points, with segmentation proceeding layer by layer toward each side of the volume.
In step S160, an actual three-dimensional lung nodule segmentation result is determined according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result.
In the embodiment of the present application, after the first and second three-dimensional lung nodule segmentation results are obtained, their voxel-wise mean is calculated, and the regions where the mean exceeds a preset threshold form the actual three-dimensional lung nodule segmentation result.
The first and second three-dimensional lung nodule segmentation results may be three-dimensional lung nodule segmentation probability maps.
The preset threshold may be a specific numerical value, for example 0.5 (i.e., 50%), and may be set according to the actual situation; it is not limited here.
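A minimal sketch of this fusion step, assuming both results are voxel-wise probability volumes of the same shape (the names are illustrative):

```python
import numpy as np

def fuse_segmentations(prob_first, prob_second, threshold=0.5):
    """Average the two 3D probability volumes and keep voxels above threshold."""
    mean_prob = (prob_first + prob_second) / 2.0
    return (mean_prob > threshold).astype(np.uint8)  # 1 = lung nodule voxel
```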
The deep-learning-based lung nodule semi-automatic segmentation method described above thus comprises: acquiring the three-dimensional lung medical image data to be segmented and performing preprocessing and data enhancement; obtaining position prior information in an interaction layer and generating an interaction information map from it; inputting the two-dimensional lung medical image of the interaction layer and the interaction information map into the preset interaction model to generate the two-dimensional lung nodule segmentation result; cropping the region of interest from the volume according to that result and feeding it to the preset segmentation model to generate the first three-dimensional lung nodule segmentation result; segmenting the remaining layers one by one with the preset propagation model and combining the layer-by-layer results with the two-dimensional result into the second three-dimensional lung nodule segmentation result; and determining the actual three-dimensional lung nodule segmentation result from the first and second results. By combining the two-dimensional and three-dimensional segmentation networks, global information of the whole image and three-dimensional nodule characteristics are both learned, overcoming the respective limitations of using either network alone; the user's prior information removes the need to choose a segmentation method without knowing the target size in advance, and multi-branch joint training effectively improves segmentation performance and alleviates the false positives of fully automatic methods.
In an embodiment, the present application further provides an implementation process of a lung nodule semi-automatic segmentation method based on deep learning, including:
in step S110, three-dimensional lung medical image data to be segmented is acquired, and preprocessing and data enhancement are performed on the three-dimensional lung medical image data to be segmented;
in the embodiment of the present application, the three-dimensional pulmonary medical image data to be segmented may include magnetic resonance image data, computed tomography image data, and positron emission tomography image data, and the image data may be obtained by manually labeling a lung nodule region in an image by a professional imaging physician in advance.
In an embodiment of the present application, the read clinical medical images may be divided into a training set, a validation set, and a test set. For training, testing and validation of a deep learning based semi-automatic segmentation model of lung nodules.
In the present application, the three-dimensional medical image data to be segmented may specifically include a plurality of two-dimensional CT (computed tomography) images, for example of size 200 × 512 × 512, i.e., a total of 200 two-dimensional CT images of 512 × 512 pixels.
In the embodiment of the application, the preprocessing and data enhancement of the three-dimensional lung medical image data to be segmented specifically include resampling the acquired data to a fixed resolution of 1 × 1, then clipping the HU values to the range [-1200, 600] and normalizing them to [0, 1]. The lung region may then be segmented using a watershed segmentation algorithm and expanded using a dilation algorithm, and only the pixel values inside the expanded segmentation region are retained.
In the embodiment of the application, in the process of training the model, the three-dimensional medical lung image data can be divided into a training set, a verification set and a test set according to the ratio of 3:1:1, and the three data sets respectively contain three-dimensional lung nodule regions manually labeled by experienced imaging physicians, wherein the labeling result of the test set is only used for performance evaluation of the final model and is not used in the training stage of the model.
In step S120, obtaining position prior information in an interaction layer of the three-dimensional lung medical image data to be segmented, and generating an interaction information map according to the position prior information;
in this embodiment of the present application, the position prior information may be provided by a user in a click or line drawing manner in a lung nodule region of a two-dimensional lung nodule image including a lung nodule in any layer of the three-dimensional image data to be segmented.
Wherein, the layer where the two-dimensional pulmonary nodule image recorded with the position prior information is located is an interaction layer.
In an embodiment of the present application, generating an interaction information map according to the location prior information includes:
when the position prior information is entered by clicking, generating a first interaction information map, and generating a Gaussian-distribution heatmap of a preset size in the first interaction information map with the click position as the center point, wherein the pixel value at the center point is 1, the pixel value at the boundary is 0, and the pixel values of the region outside the Gaussian-distribution heatmap are 0;
and when the position prior information is entered by scribbling, generating a second interaction information map, wherein the pixels of the scribbled region in the second interaction information map are 1 and the pixels of the remaining regions are 0.
In this embodiment, assuming the two-dimensional lung nodule image has height H and width W, when the position prior information is entered by clicking, an image of size H × W may be generated in which a Gaussian-distribution heatmap with a radius of 5 pixels is centered on the user's click position; the pixel value at the heatmap's center is 1, falls to 0 at its boundary, and the pixel values outside the heatmap are all 0. This image may be used as the interaction information map.
When the position prior information is entered by scribbling, an image of size H × W is generated in which the pixel values of the user's scribbled region are 1 and the pixel values elsewhere are 0, and this image is taken as the interaction information map.
Wherein, the preset size may be, for example, a radius of 5 pixels.
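The two interaction maps can be sketched as follows (hypothetical helpers; the patent fixes only the 5-pixel radius, center value 1, and boundary value 0, so the Gaussian width below is an assumption):

```python
import numpy as np

def click_interaction_map(h, w, cy, cx, radius=5):
    """Gaussian heatmap centered on the click: 1 at the center, 0 outside."""
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    sigma = radius / 3.0                 # assumed width: boundary ~ 3 sigma
    heat = np.exp(-d2 / (2.0 * sigma ** 2))
    heat[d2 > radius ** 2] = 0.0         # zero everything outside the heatmap
    return heat.astype(np.float32)

def scribble_interaction_map(h, w, scribble_coords):
    """Binary map: 1 on the user's scribbled pixels, 0 elsewhere."""
    m = np.zeros((h, w), dtype=np.float32)
    for y, x in scribble_coords:
        m[y, x] = 1.0
    return m
```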
In step S130, the two-dimensional lung medical image of the interaction layer and the interaction information map are input into a preset interaction model to generate a two-dimensional lung nodule segmentation result;
in the embodiment of the application, the two-dimensional lung medical image of the interaction layer, i.e., the two-dimensional lung medical image on which the position prior information was entered, and the interaction information map are input together into the interaction model, which extracts two-dimensional lung nodule deep learning features from them and finally outputs the lung nodule segmentation result on the two-dimensional lung medical image.
In the embodiment of the application, the interaction model adopts the DeepLabv3+ network structure, whose overall framework consists of an encoder and a decoder. The encoder is a deep convolutional neural network with atrous (dilated) convolution, using the common ResNet model as its backbone, and extracts multi-scale information from the output features through an atrous spatial pyramid pooling module; the decoder further fuses low-level and high-level features and outputs the segmentation result, improving segmentation accuracy.
In an embodiment of the application, before the inputting the two-dimensional lung medical image of the interaction layer and the interaction information map into a preset interaction model to generate a two-dimensional lung nodule segmentation result, the method includes:
reconstructing an input layer of the interaction model;
the reconstruction process specifically includes:
modifying the number of image channels of the input formed by the two-dimensional medical image of the interaction layer and the interaction information map to a preset channel number;
summing the weights of the pre-trained input layers according to the channel direction to serve as the weight of a first image channel of the input layer of the interactive model;
the weights of the second image channel of the input layer of the interaction model are randomly initialized using a gaussian distribution.
In the embodiment of the present application, ImageNet-pretrained weights are generally used as the initial weights of the interaction model. However, because the model's input differs, the input layer of the interaction model must be reconstructed. Specifically, the number of channels of the input image is changed to 2, the ImageNet-pretrained weights are summed along the channel direction to serve as the weight of the first image channel of the modified input layer, and the weight of the second image channel is randomly initialized from a Gaussian distribution. For example, if the initial input-layer weight has size n × 2 × k, the n × 1 × k weight of the first channel can be obtained by summing the ImageNet pretrained weights over the channel dimension; e.g., the value at position [0, 0, 0, 0] of the n × 1 × k weight is the sum of the values at [0, 0, 0, 0], [0, 1, 0, 0], and [0, 2, 0, 0] in the ImageNet weights. The weight of the second channel is randomly initialized from a Gaussian distribution.
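In PyTorch terms, the reconstruction could look like the sketch below (assuming a torchvision ResNet backbone whose first convolution `conv1` carries ImageNet-pretrained 3-channel weights; the Gaussian standard deviation is an assumption):

```python
import torch.nn as nn
from torchvision.models import resnet50

def rebuild_input_layer(pretrained_conv: nn.Conv2d) -> nn.Conv2d:
    """2-channel input conv: channel 1 = ImageNet weights summed over RGB,
    channel 2 = Gaussian-initialized weights for the interaction map."""
    w = pretrained_conv.weight.data                     # shape (n, 3, k, k)
    new_conv = nn.Conv2d(2, w.shape[0], kernel_size=w.shape[2],
                         stride=pretrained_conv.stride,
                         padding=pretrained_conv.padding, bias=False)
    new_conv.weight.data[:, 0:1] = w.sum(dim=1, keepdim=True)   # channel sum
    nn.init.normal_(new_conv.weight.data[:, 1:2], mean=0.0, std=0.01)
    return new_conv

backbone = resnet50(weights="IMAGENET1K_V1")
backbone.conv1 = rebuild_input_layer(backbone.conv1)
```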
In an embodiment of the application, when the interaction model is in the training state, the loss between the model's output and the manual labeling result can be computed directly through a loss function, and the interaction model is trained iteratively based on this loss.
Wherein, the loss is calculated as:

$$L = L_D + L_{topk}$$

where $L_D$ is the Dice loss, used to measure the degree of overlap between the model prediction and the ground-truth label:

$$L_D = 1 - \frac{2\sum_{i=1}^{H \times W} p_i g_i + \epsilon}{\sum_{i=1}^{H \times W} p_i + \sum_{i=1}^{H \times W} g_i + \epsilon}$$

where $p$ denotes the model's predicted segmentation, $g$ the manual labeling result, $i$ the $i$-th pixel of the image, $H$ and $W$ the height and width of the image, and $\epsilon = 1\mathrm{e}{-7}$ is a smoothing term that prevents the denominator from being zero.

$L_{topk}$ is a loss function that makes the model focus on hard-to-segment samples during training:

$$L_{topk} = -\frac{1}{N}\sum_{c=1}^{C}\sum_{i=1}^{N} \mathbb{1}\{g_i = c \text{ and } p_{i,c} < t\}\,\log(p_{i,c})$$

where $\mathbb{1}\{g_i = c \text{ and } p_{i,c} < t\}$ is a binary indicator function with threshold $t$, $\log(p_{i,c})$ is the logarithm of the predicted probability, $N$ is the number of samples summed over, and the outer sum runs over all $C$ segmentation classes; in this application there is only one class, lung nodules, i.e., $C = 1$.
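A PyTorch sketch of this combined loss for the single-class case $C = 1$ (the threshold value t and the reduction over the hard pixels are assumptions consistent with the formulas above):

```python
import torch

def dice_loss(pred, target, eps=1e-7):
    """pred: predicted probabilities, target: binary labels, both (H, W)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def topk_loss(pred, target, t=0.7):
    """Log loss over hard pixels only: nodule pixels predicted below t."""
    hard = (target == 1) & (pred < t)     # indicator 1{g_i = c and p_ic < t}
    if not hard.any():
        return pred.new_tensor(0.0)
    return -torch.log(pred[hard] + 1e-7).mean()

def total_loss(pred, target):
    return dice_loss(pred, target) + topk_loss(pred, target)
```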
In the embodiment of the application, while the loss function value keeps decreasing, the network parameters are updated and the network weights are saved through back-propagation and stochastic gradient descent, and training stops once the loss function value converges.
In the embodiment of the present application, the training stage is configured as follows: the model optimizer is Adam with a weight decay of 1e-4, the learning rate is 1e-4, the maximum number of iterations is 1000, the learning rate is multiplied by 0.1 after 500 iterations, the batch size is set to 6, and training is performed on an NVIDIA V100 GPU.
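Under these settings, the optimizer and learning-rate schedule might be configured as follows (PyTorch sketch; the placeholder module stands in for the interaction model):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(2, 1, 3)  # placeholder for the interaction model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
# Multiply the learning rate by 0.1 after 500 of the 1000 iterations
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[500], gamma=0.1)
```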
In step S140, according to the two-dimensional lung nodule segmentation result, a three-dimensional medical image of a region where a lung nodule is located is cut out from the three-dimensional lung medical image to be segmented, so as to serve as a region of interest, and the region of interest is input into a preset segmentation model, so as to generate a first three-dimensional lung nodule segmentation result;
in the embodiment of the application, the preset segmentation model may be a three-dimensional convolutional neural network model, and the specific structure of the model may be a model 3D-UNet.
In an embodiment of the present application, the segmenting the three-dimensional medical image of the region where the lung nodule is located in the pulmonary medical image data according to the two-dimensional lung nodule segmentation result includes:
taking the central point of the two-dimensional lung nodule segmentation result as the central point of the interest region;
and cutting out the three-dimensional medical image of the region where the lung nodule is located according to the maximum diameter of the lung nodule as the side length.
Specifically, a region of interest is cropped based on the two-dimensional lung nodule segmentation result: the center point of the two-dimensional segmentation result is taken as the center of the three-dimensional cropping region, and the region of interest is cropped using a preset pixel size as the side length, for example 96 pixels. On the one hand this ensures that the lung nodule is fully contained in the cropped region; on the other hand it keeps the model's computation tractable. The preset pixel size may be determined from the maximum actual side length of a lung nodule.
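A sketch of this cropping step, assuming the volume is indexed (layer, y, x), the 2D result is a binary mask, and the crop is zero-padded when it runs past the volume border:

```python
import numpy as np

def crop_roi(volume, mask2d, z_index, side=96):
    """Crop a side**3 cube centered on the 2D nodule mask's center point."""
    ys, xs = np.nonzero(mask2d)
    cy, cx = int(ys.mean()), int(xs.mean())   # center of the 2D segmentation
    half = side // 2
    z0, y0, x0 = (max(z_index - half, 0), max(cy - half, 0), max(cx - half, 0))
    roi = volume[z0:z0 + side, y0:y0 + side, x0:x0 + side]
    pads = [(0, side - s) for s in roi.shape]  # pad if clipped at the border
    return np.pad(roi, pads, mode="constant"), (z0, y0, x0)
```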
In the embodiment of the present application, when cropping with a side length of 96 pixels, the cropped 96 × 96 × 96 three-dimensional region of interest is input into the 3D-UNet model, which outputs the first three-dimensional lung nodule segmentation result together with three-dimensional lung nodule deep learning features in the form of feature maps. Denoting the number of layers, height, and width of the output feature maps by D, H, and W respectively, the feature map size in the UNet decoder grows gradually, through convolution and upsampling, until it reaches D × H × W. The feature map of the last layer in each feature level (the levels preceding the model's output layer) is selected as a three-dimensional output feature, yielding four three-dimensional feature maps, the largest of size D × H × W.
In an embodiment of the present application, when the preset segmentation model is in the training state, the same cropping procedure may be used to crop the corresponding three-dimensional region of interest from the ground-truth label and input it into the model; the loss between the segmentation result in the model's output and the ground-truth label is computed with the same loss function used by the interaction model, merely extended from two-dimensional to three-dimensional images, and the network parameters are then updated by back-propagation and gradient descent. This is not described in detail here.
Wherein the ground-truth label is the lung nodule region manually annotated by a physician.
In an embodiment of the present application, the specific settings of the training phase are as follows: the model optimizer is an Adam optimizer with weight attenuation of 1e-4, the learning rate is 1e-3, the maximum iteration frequency is 1000, the learning rate is reduced to 0.1 time after 500 iterations, the batch size is set to 4, and the V100 video card is used for training.
In step S150, segmenting, layer by layer, an unsegmented layer of the three-dimensional pulmonary medical image data to be segmented through a preset propagation model to obtain a layer-by-layer segmentation result of each layer, and generating a second three-dimensional pulmonary nodule segmentation result according to the layer-by-layer segmentation result of each layer and the two-dimensional pulmonary nodule segmentation result;
in an embodiment of the present application, segmenting, layer by layer, an unsegmented layer of the three-dimensional pulmonary medical image data to be segmented through a preset propagation model to obtain a layer-by-layer segmentation result of each layer, including:
taking the layer of the two-dimensional lung medical image which is segmented currently as a target layer, and taking the layer of the two-dimensional lung medical image which is segmented by the interactive model or the propagation model as a segmented layer;
acquiring a segmented layer deep learning feature, a lung nodule three-dimensional feature and a target layer deep learning feature, and inputting the features into the propagation model;
and performing two-dimensional lung nodule segmentation layer by taking the adjacent frames of the interaction layer as starting points through the propagation model so as to obtain a layer-by-layer segmentation result of each layer.
In the embodiment of the application, the layer currently being segmented is recorded as the target layer, and layers already segmented by the interactive model or the propagation model are recorded as segmented layers. Based on the segmented-layer lung nodule deep learning features, the lung nodule three-dimensional features, and the target-layer lung nodule deep learning features, the two-dimensional propagation model segments the unsegmented layers one by one to obtain a layer-by-layer segmentation result for each layer. That is, the segmentation result of each target layer is predicted by the propagation model's decoder from the target-layer deep learning features, the segmented-layer deep learning features, and the lung nodule three-dimensional features.
In an embodiment of the application, the interaction layer is a layer where a two-dimensional lung nodule image with position prior information is located, and the propagation model may perform layer-by-layer segmentation towards both sides respectively with adjacent frames of the interaction layer as starting points, so as to obtain a layer-by-layer segmentation result of each layer.
In this embodiment of the present application, the target layer deep learning feature may be obtained by:
acquiring a first key value of the two-dimensional pulmonary nodule medical image of the target layer and a second key value of the two-dimensional pulmonary nodule medical image of the segmented layer through an encoder;
processing the two-dimensional lung nodule medical image of the segmented layer through the encoder to obtain a feature map of the segmented layer;
generating a similarity matrix according to the first key value and the second key value;
and acquiring the deep learning characteristic of the target layer according to the product of the similarity matrix and the segmented layer characteristic graph.
Specifically, assuming the first key value of the target layer has size h × w × c and the second key value of the segmented layers has size n × h × w × c, the first key value is multiplied by the second key value and passed through an activation function to obtain a matrix of size hw × nhw representing the similarity between each position of the target layer and each position of the segmented layers; multiplying this matrix by the segmented-layer feature maps yields the feature map of the target layer, which serves as the target layer's deep learning feature.
Wherein the activation function may be a softmax function.
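This matching can be sketched as a memory-read operation (PyTorch; shapes follow the description above: target key h × w × c, segmented-layer keys n × h × w × c):

```python
import torch
import torch.nn.functional as F

def read_segmented_layers(key_q, keys_m, feats_m):
    """key_q:   (h, w, c)      target-layer key
       keys_m:  (n, h, w, c)   segmented-layer keys
       feats_m: (n, h, w, cf)  segmented-layer feature maps
       returns  (h, w, cf)     feature for segmenting the target layer"""
    h, w, c = key_q.shape
    q = key_q.reshape(h * w, c)                    # (hw, c)
    k = keys_m.reshape(-1, c)                      # (nhw, c)
    sim = F.softmax(q @ k.t(), dim=1)              # (hw, nhw) similarity matrix
    v = feats_m.reshape(k.shape[0], -1)            # (nhw, cf)
    return (sim @ v).reshape(h, w, -1)             # weighted read of features
```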
In one embodiment of the present application, the three-dimensional feature of the lung nodule is obtained by:
respectively selecting two-dimensional feature maps corresponding to the target layer from each three-dimensional deep learning feature map output by the preset segmentation model;
and according to the size of the region of interest, reducing the two-dimensional feature map to a feature map with the size consistent with that of the two-dimensional lung image of the target layer, and taking the feature map as the three-dimensional feature of the lung nodule.
Specifically, a feature layer at the target layer position is selected from each three-dimensional deep learning feature map output by the preset segmentation model, and the lung nodule three-dimensional feature is then obtained by reversing the step of cropping the region of interest. That is, the two-dimensional feature map corresponding to the target layer position is selected from each three-dimensional deep learning feature volume and restored, according to the cropping position, to a feature map matching the target layer size, which serves as the lung nodule three-dimensional feature.
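The restoration step can be pictured with the following sketch; the zero-padding outside the cropped region and all names are assumptions for illustration only.

import torch
import torch.nn.functional as F

def nodule_3d_feature(vol_feat, z_index, crop_box, full_hw):
    # vol_feat: (c, D, H, W) one 3D feature volume from the preset segmentation model
    # z_index:  depth position of the target layer within the ROI
    # crop_box: (y0, x0, y1, x1) position of the ROI crop in the full slice
    # full_hw:  (H_full, W_full) size of the target layer's 2D lung image
    c = vol_feat.shape[0]
    slice_feat = vol_feat[:, z_index]                # 2D feature map at the target layer position
    y0, x0, y1, x1 = crop_box
    slice_feat = F.interpolate(slice_feat[None],     # resize back to the ROI's spatial size
                               size=(y1 - y0, x1 - x0),
                               mode="bilinear", align_corners=False)[0]
    out = torch.zeros(c, *full_hw)                   # reverse of the cropping step:
    out[:, y0:y1, x0:x1] = slice_feat                # paste the features at the crop position
    return out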
Referring to fig. 3, in this embodiment of the application, a first key value is obtained by performing key value extraction on the two-dimensional lung nodule image of the target layer, and a second key value is obtained by performing key value extraction on the two-dimensional lung nodule images of the segmented layers. The target layer deep learning feature is obtained based on the first key value and the second key value; the target layer deep learning feature and the segmented layer deep learning feature are input into the decoder, and the lung nodule three-dimensional features of different feature levels are added at the corresponding positions in the decoder, so as to output the segmentation result of the target layer.
In this embodiment of the application, when the propagation model is in the training stage, 5 layers of two-dimensional lung nodule images may be randomly selected from one piece of three-dimensional lung nodule image data as a sample. The first-layer image and its label serve as the segmented layer image and segmentation result, and the second-layer image serves as the target layer image; the feature map of the layer where the second-layer image is located is selected from the three-dimensional feature maps output by the preset segmentation model and input into the propagation model as the lung nodule three-dimensional feature. The propagation model then outputs the segmentation result of the second layer, and the loss between this result and the segmentation label of the second-layer image is calculated through a loss function computed in the same way as for the interaction model. Next, the images of the first two layers serve as segmented layer images, the first-layer label and the second-layer segmentation result serve as segmented layer segmentation results, and the third-layer image serves as the target layer image; the feature map of the layer where the third-layer image is located is selected from the three-dimensional feature maps output by the preset segmentation model and input into the propagation model as the lung nodule three-dimensional feature, the segmentation result of the third-layer image is obtained, and the loss between this result and the segmentation label of the third-layer image is calculated in the same way. The segmentation results and losses of the fourth and fifth layers are calculated likewise, and the network parameters are then updated by back propagation and stochastic gradient descent.
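A compact sketch of this sequential training scheme follows; prop_model and loss_fn are hypothetical callables standing in for the propagation network and the interaction model's loss, and their signatures are assumptions, not defined by the patent.

def train_sample(slices, labels, feats_3d, prop_model, loss_fn):
    # slices, labels: 5 consecutive 2D images and their segmentation labels
    # feats_3d: per-layer lung nodule 3D features from the preset segmentation model
    total = 0.0
    seg_imgs, seg_masks = [slices[0]], [labels[0]]   # layer 1 uses its ground-truth label
    for t in range(1, 5):                            # layers 2..5 become target layers in turn
        pred = prop_model(slices[t], seg_imgs, seg_masks, feats_3d[t])
        total = total + loss_fn(pred, labels[t])     # same loss as the interaction model
        seg_imgs.append(slices[t])                   # the target layer becomes a segmented layer
        seg_masks.append(pred)                       # its prediction becomes a segmented result
    return total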
In this embodiment of the application, the specific settings of the training stage are as follows: the model optimizer is an Adam optimizer with a weight decay of 1e-4, the learning rate is 1e-4, the maximum number of iterations is 2000, the learning rate is multiplied by 0.1 after 500 iterations, the batch size is set to 4, and training is performed on an NVIDIA V100 GPU.
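These settings translate directly into a PyTorch configuration, as in the illustrative sketch below; the stand-in model and the dummy data are assumptions used only to make the snippet self-contained.

import torch
import torch.nn as nn

model = nn.Conv2d(2, 1, 3, padding=1)     # stand-in for the propagation network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(   # multiply the learning rate by 0.1
    optimizer, milestones=[500], gamma=0.1)         # after 500 of the 2000 iterations

for it in range(2000):                    # maximum number of iterations
    x = torch.randn(4, 2, 64, 64)         # batch size 4 (dummy batch for the sketch)
    loss = model(x).pow(2).mean()         # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()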
In step S160, an actual three-dimensional lung nodule segmentation result is determined according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result.
In this embodiment of the application, when the propagation model is in the test stage, the two-dimensional lung nodule segmentation result output by the interaction model for the interaction layer image is used as the segmented layer image and segmentation result, the layer adjacent to and above the interaction layer is used as the target layer, and both are input into the propagation model to obtain the segmentation result of that adjacent layer. The interaction layer and this adjacent layer are then used as segmented layers, the next layer above is used as the target layer and input into the propagation model, and so on, so that the segmentation result of each layer above the interaction layer is obtained layer by layer (propagation stops when the segmentation result of a layer contains no lung nodule). Similarly, the segmentation result of each layer below the interaction layer is obtained layer by layer, thereby achieving layer-by-layer segmentation.
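The bidirectional test-time procedure can be summarized by the following sketch; prop_model is a hypothetical callable and its signature is an assumption.

import numpy as np

def propagate(volume, interact_idx, seg_interact, prop_model):
    # volume:       (D, H, W) stack of 2D slices
    # interact_idx: index of the interaction layer
    # seg_interact: 2D mask of the interaction layer from the interaction model
    results = {interact_idx: seg_interact}
    for step in (1, -1):                            # propagate upward, then downward
        segmented = [(volume[interact_idx], seg_interact)]
        z = interact_idx + step
        while 0 <= z < volume.shape[0]:
            mask = prop_model(volume[z], segmented)
            if mask.sum() == 0:                     # no lung nodule in this layer: stop
                break
            results[z] = mask
            segmented.append((volume[z], mask))     # target layer joins the segmented layers
            z += step
    return results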
In an embodiment of the present application, the determining an actual lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result includes:
calculating a mean of the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result to determine a segmentation probability;
and taking the region with the segmentation probability larger than a preset threshold value as a lung nodule region, and outputting the actual three-dimensional lung nodule segmentation result.
The first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result may be three-dimensional lung nodule segmentation probability maps, i.e., the probability that each pixel (voxel) belongs to a lung nodule.
The preset threshold may be a specific numerical value, for example, 0.5 (i.e., 50%), and may be set according to the actual situation, which is not limited herein.
In this embodiment, after the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result are obtained, the mean value of the two results is calculated as the segmentation probability, and the regions whose segmentation probability is greater than the preset threshold form the actual three-dimensional lung nodule segmentation result.
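A minimal sketch of this fusion step, assuming both branch outputs are voxel-wise probability volumes of the same shape and using 0.5, one of the example thresholds, as the default:

import numpy as np

def fuse(first_result, second_result, threshold=0.5):
    prob = (first_result + second_result) / 2.0     # mean segmentation probability
    return (prob > threshold).astype(np.uint8)      # actual 3D lung nodule mask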
Referring to fig. 4, the first column shows lung medical images, the second column shows the actual three-dimensional segmentation results generated by the deep learning based semi-automatic lung nodule segmentation model, and the third column shows the manual labeling results. As can be seen from fig. 4, the actual three-dimensional segmentation results are highly similar to the manual labeling results, which shows that the deep learning based semi-automatic lung nodule segmentation model has high accuracy. Experiments on a plurality of public data sets further show that the method outperforms a plurality of advanced segmentation methods on three-dimensional medical segmentation tasks and can effectively alleviate the false positive problem of fully automatic methods; meanwhile, the method provided by the application can be easily applied to other three-dimensional medical image segmentation tasks.
According to the method, two-dimensional segmentation and three-dimensional segmentation are performed on the medical image by combining a two-dimensional segmentation network and a three-dimensional segmentation network: the two-dimensional segmentation better learns the global information of the whole image, while the three-dimensional segmentation better learns the three-dimensional characteristics of lung nodules, which effectively solves the problem that a two-dimensional network alone cannot make good use of three-dimensional characteristics and a three-dimensional network alone cannot make good use of global information. By exploiting the user's prior information, the method also solves the problem that a good cropping scheme cannot be chosen without prior knowledge of the target size; the multi-branch joint training mode effectively improves the segmentation performance and alleviates the false positive problem of fully automatic methods.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not limit the implementation process of the embodiments of the present invention in any way.
In an embodiment, a deep learning based semi-automatic lung nodule segmentation apparatus is provided, and the apparatus corresponds one-to-one to the deep learning based semi-automatic lung nodule segmentation method in the above embodiment. As shown in fig. 5, the deep learning based semi-automatic lung nodule segmentation apparatus includes: a three-dimensional lung medical image data acquisition unit 10, a position prior information obtaining unit 20, a two-dimensional lung nodule segmentation result generation unit 30, a first three-dimensional lung nodule segmentation result generation unit 40, a second three-dimensional lung nodule segmentation result generation unit 50 and an actual three-dimensional lung nodule segmentation result determination unit 60.
The three-dimensional lung medical image data acquisition unit 10 is configured to acquire three-dimensional lung medical image data to be segmented, and to perform preprocessing and data enhancement on the three-dimensional lung medical image data to be segmented;
a position prior information obtaining unit 20, configured to obtain position prior information in an interaction layer of the three-dimensional lung medical image data to be segmented, and generate an interaction information map according to the position prior information;
a two-dimensional pulmonary nodule segmentation result generating unit 30, configured to input the two-dimensional pulmonary medical image of the interaction layer and the interaction information map into a preset interaction model to generate a two-dimensional pulmonary nodule segmentation result;
a first three-dimensional pulmonary nodule segmentation result generation unit 40, configured to cut out, according to the two-dimensional pulmonary nodule segmentation result, a three-dimensional medical image of a region where a pulmonary nodule is located in the three-dimensional pulmonary medical image to be segmented, so as to serve as an interest region, and input the interest region into a preset segmentation model, so as to generate a first three-dimensional pulmonary nodule segmentation result;
a second three-dimensional pulmonary nodule segmentation result generation unit 50, configured to segment, layer by layer, an unsegmented layer of the three-dimensional pulmonary medical image data to be segmented through a preset propagation model to obtain a layer-by-layer segmentation result of each layer, and generate a second three-dimensional pulmonary nodule segmentation result according to the layer-by-layer segmentation result of each layer and the two-dimensional pulmonary nodule segmentation result;
and an actual three-dimensional lung nodule segmentation result determining unit 60, configured to determine an actual three-dimensional lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result.
In an embodiment, the apparatus for semi-automatic segmentation of lung nodules based on deep learning further includes a reconstruction unit configured to:
reconstructing an input layer of the interaction model;
the reconstruction unit is further configured to: the method specifically comprises the following steps:
modifying the number of image channels of the two-dimensional medical image of the interaction layer and of the interaction information map to a preset channel number;
summing the weights of the pre-trained input layers according to the channel direction to serve as the weight of a first image channel of the input layer of the interactive model;
the weights of the second image channel of the input layer of the interaction model are randomly initialized using a Gaussian distribution, as in the sketch below.
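A minimal PyTorch sketch of this input-layer reconstruction; the pretrained layer's 3-channel shape and the standard deviation of the Gaussian initialization are assumptions, not fixed by the patent.

import torch
import torch.nn as nn

def rebuild_input_layer(pretrained_conv):
    # pretrained_conv: pretrained first conv layer, e.g. nn.Conv2d(3, 64, 7, ...)
    new_conv = nn.Conv2d(2, pretrained_conv.out_channels,   # 2 channels: image + interaction map
                         kernel_size=pretrained_conv.kernel_size,
                         stride=pretrained_conv.stride,
                         padding=pretrained_conv.padding,
                         bias=pretrained_conv.bias is not None)
    with torch.no_grad():
        # first image channel: sum of the pretrained weights along the channel direction
        new_conv.weight[:, 0] = pretrained_conv.weight.sum(dim=1)
        # second image channel: random Gaussian initialization (std is assumed)
        nn.init.normal_(new_conv.weight[:, 1], mean=0.0, std=0.01)
        if pretrained_conv.bias is not None:
            new_conv.bias.copy_(pretrained_conv.bias)
    return new_conv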
In an embodiment, the second three-dimensional lung nodule segmentation result generating unit 50 is further configured to:
taking the layer of the two-dimensional lung medical image currently being segmented as the target layer, and taking the layers of the two-dimensional lung medical image already segmented by the interaction model or the propagation model as segmented layers;
acquiring a segmented layer deep learning feature, a lung nodule three-dimensional feature and a target layer deep learning feature, and inputting the features into the propagation model;
and performing layer-by-layer two-dimensional lung nodule segmentation by using the adjacent frames of the interaction layer as starting points through the propagation model so as to obtain a layer-by-layer segmentation result of each layer.
In an embodiment, the second three-dimensional lung nodule segmentation result generating unit 50 is further configured to:
acquiring a first key value of the two-dimensional pulmonary nodule medical image of the target layer and a second key value of the two-dimensional pulmonary nodule medical image of the segmented layer through an encoder;
processing the two-dimensional lung nodule medical image of the segmented layer through the encoder to obtain the segmented layer feature map;
generating a similarity matrix for representing the similarity between the target layer and the segmented layer according to the first key value and the second key value;
and acquiring the target layer deep learning feature according to the product of the similarity matrix and the segmented layer feature map.
In an embodiment, the second three-dimensional lung nodule segmentation result generating unit 50 is further configured to:
respectively selecting two-dimensional feature maps corresponding to the target layer from each three-dimensional deep learning feature map output by the preset segmentation model;
and restoring, according to the size of the region of interest, the two-dimensional feature map to a feature map whose size is consistent with that of the two-dimensional lung image of the target layer, and taking the restored feature map as the lung nodule three-dimensional feature.
In an embodiment, the position prior information obtaining unit 20 is further configured to:
when the position prior information is input in a clicking mode, generating a first interaction information map, and generating a Gaussian-distribution heatmap of a preset size in the first interaction information map with the click position as the center point, wherein the pixel value of the center point in the first interaction information map is 1, the pixel value at the boundary is 0, and the pixel values of the region outside the Gaussian-distribution heatmap are 0;
and when the position prior information is input in a scribbling mode, generating a second interaction information map, wherein the pixels of the scribbled region in the second interaction information map are 1 and the pixels of the remaining regions are 0.
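The two interaction information maps can be generated as in the following sketch; the heatmap radius and spread are assumed values, since the text only specifies a preset size.

import numpy as np

def click_map(h, w, cy, cx, radius=10):
    # Gaussian-distribution heatmap centered on the click: 1 at the center,
    # approaching 0 at the boundary, 0 outside the preset-size region
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    sigma = radius / 3.0                      # assumed spread
    heat = np.exp(-d2 / (2 * sigma ** 2))
    heat[d2 > radius ** 2] = 0.0
    return heat.astype(np.float32)

def scribble_map(h, w, stroke_coords):
    # second interaction information map: stroke pixels 1, remaining pixels 0
    m = np.zeros((h, w), dtype=np.float32)
    for y, x in stroke_coords:
        m[y, x] = 1.0
    return m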
In an embodiment, the first three-dimensional lung nodule segmentation result generation unit 40 is further configured to:
taking the central point of the two-dimensional lung nodule segmentation result as the central point of the interest region;
and cutting out the three-dimensional medical image of the region where the lung nodule is located, with the maximum diameter of the lung nodule as the side length.
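A sketch of this cropping step follows; taking the mask centroid as the center point and using a cubic crop are reading choices for illustration, not fixed by the patent.

import numpy as np

def crop_roi(volume, mask_2d, z):
    # volume:  (D, H, W) lung CT volume
    # mask_2d: 2D lung nodule segmentation result of the interaction layer
    # z:       index of the interaction layer
    ys, xs = np.nonzero(mask_2d)
    cy, cx = int(ys.mean()), int(xs.mean())            # center point of the 2D result
    diam = int(max(ys.max() - ys.min(), xs.max() - xs.min())) + 1
    half = diam // 2 + 1                               # side length = max nodule diameter
    z0, z1 = max(z - half, 0), min(z + half, volume.shape[0])
    y0, y1 = max(cy - half, 0), min(cy + half, volume.shape[1])
    x0, x1 = max(cx - half, 0), min(cx + half, volume.shape[2])
    return volume[z0:z1, y0:y1, x0:x1], (z0, y0, x0)   # ROI and its crop position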
In an embodiment, the actual three-dimensional lung nodule segmentation result determination unit 60 is further configured to:
calculating a mean of the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result to determine a segmentation probability;
and taking the regions whose segmentation probability is larger than a preset threshold value as lung nodule regions, and outputting the actual three-dimensional lung nodule segmentation result.
According to the apparatus, two-dimensional segmentation and three-dimensional segmentation are performed on the medical image by combining a two-dimensional segmentation network and a three-dimensional segmentation network: the two-dimensional segmentation better learns the global information of the whole image, while the three-dimensional segmentation better learns the three-dimensional characteristics of lung nodules, which effectively solves the problem that a two-dimensional network alone cannot make good use of three-dimensional characteristics and a three-dimensional network alone cannot make good use of global information. By exploiting the user's prior information, the apparatus also solves the problem that a good cropping scheme cannot be chosen without prior knowledge of the target size; the multi-branch joint training mode effectively improves the segmentation performance and alleviates the false positive problem of fully automatic methods.
For the specific definition of the deep learning based semi-automatic lung nodule segmentation apparatus, refer to the above definition of the deep learning based semi-automatic lung nodule segmentation method, which is not repeated herein. The modules in the deep learning based semi-automatic lung nodule segmentation apparatus may be wholly or partially implemented by software, hardware or a combination thereof. The modules may be embedded in or independent of a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure thereof may be as shown in fig. 6. The computer device comprises a processor, a memory and a network interface which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a readable storage medium. The readable storage medium stores computer readable instructions. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer readable instructions, when executed by a processor, implement a method for deep learning based semi-automatic segmentation of lung nodules. The readable storage media provided by the present embodiment include nonvolatile readable storage media and volatile readable storage media.
In an embodiment, there is provided a computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, the processor implementing the method for semi-automatic segmentation of lung nodules based on deep learning as described above when executing the computer readable instructions.
In an embodiment, a readable storage medium storing computer readable instructions is provided, and the computer readable instructions, when executed by one or more processors, cause the one or more processors to perform the deep learning based semi-automatic lung nodule segmentation method as described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by hardware related to computer readable instructions, which may be stored in a non-volatile readable storage medium or a volatile readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (11)

1. A semi-automatic segmentation method for lung nodules based on deep learning is characterized by comprising the following steps:
acquiring three-dimensional lung medical image data to be segmented, and preprocessing and enhancing the data of the three-dimensional lung medical image data to be segmented;
acquiring position prior information in an interaction layer of the three-dimensional lung medical image data to be segmented, generating an interaction information map according to the position prior information,
inputting the two-dimensional lung medical image of the interaction layer and the interaction information map into a preset interaction model to generate a two-dimensional lung nodule segmentation result;
according to the two-dimensional lung nodule segmentation result, a three-dimensional medical image of a region where a lung nodule is located is cut out from the three-dimensional lung medical image to be segmented to serve as an interest region, and the interest region is input into a preset segmentation model to generate a first three-dimensional lung nodule segmentation result;
segmenting non-segmented layers of the three-dimensional lung medical image data to be segmented layer by layer through a preset propagation model to obtain a layer-by-layer segmentation result of each layer, and generating a second three-dimensional lung nodule segmentation result according to the layer-by-layer segmentation result of each layer and the two-dimensional lung nodule segmentation result;
and determining an actual three-dimensional lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result.
2. The method for semi-automatic segmentation of lung nodules based on deep learning of claim 1, wherein before inputting the two-dimensional lung medical image of the interaction layer and the interaction information map into a preset interaction model to generate a two-dimensional lung nodule segmentation result, the method comprises:
reconstructing an input layer of the interaction model;
the reconstruction process specifically includes:
modifying the two-dimensional medical image of the interaction layer and the image channel number of the interaction information image into a preset channel number;
summing the weights of the pre-trained input layers according to the channel direction to serve as the weight of a first image channel of the input layer of the interactive model;
the weights of the second image channel of the input layer of the interaction model are randomly initialized using a gaussian distribution.
3. The method for semi-automatic segmentation of lung nodules based on deep learning of claim 1, wherein the segmenting, layer by layer, the non-segmented layers of the lung medical image data through a preset propagation model to obtain a layer-by-layer segmentation result of each layer comprises:
taking the layer of the two-dimensional lung medical image currently being segmented as a target layer, and taking a layer of the two-dimensional lung medical image already segmented by the interaction model or the propagation model as a segmented layer;
acquiring a segmented layer deep learning feature, a lung nodule three-dimensional feature and a target layer deep learning feature, and inputting the features into the propagation model;
and performing two-dimensional lung nodule segmentation layer by layer through the propagation model, taking the layers adjacent to the interaction layer as starting points, so as to obtain a layer-by-layer segmentation result of each layer.
4. The method as claimed in claim 3, wherein the target layer deep learning features are obtained by:
acquiring a first key value of the two-dimensional pulmonary nodule medical image of the target layer and a second key value of the two-dimensional pulmonary nodule medical image of the segmented layer through an encoder;
processing the two-dimensional lung nodule medical image of the segmented layer through the encoder to obtain a feature map of the segmented layer;
generating a similarity matrix according to the first key value and the second key value;
and acquiring the target layer deep learning feature according to the product of the similarity matrix and the segmented layer feature map.
5. The method as claimed in claim 3, wherein the three-dimensional feature of the lung nodule is obtained by:
respectively selecting two-dimensional feature maps corresponding to the target layer from each three-dimensional deep learning feature map output by the preset segmentation model;
and restoring, according to the size of the region of interest, the two-dimensional feature map to a feature map whose size is consistent with that of the two-dimensional lung image of the target layer, and taking the restored feature map as the lung nodule three-dimensional feature.
6. The method for semi-automatic segmentation of lung nodules based on deep learning according to claim 1, wherein generating an interaction information map according to the position prior information comprises:
when the position prior information is input in a clicking mode, generating a first interaction information map, and generating a Gaussian-distribution heatmap of a preset size in the first interaction information map with the click position as the center point, wherein the pixel value of the center point in the first interaction information map is 1, the pixel value at the boundary is 0, and the pixel values of the region outside the Gaussian-distribution heatmap are 0;
and when the position prior information is input in a scribbling mode, generating a second interaction information map, wherein the pixels of the scribbled region in the second interaction information map are 1 and the pixels of the remaining regions are 0.
7. The method for semi-automatic segmentation of lung nodules based on deep learning according to claim 1, wherein the segmenting the three-dimensional medical image of the region where the lung nodule is located in the pulmonary medical image data according to the two-dimensional lung nodule segmentation result comprises:
taking the central point of the two-dimensional lung nodule segmentation result as the central point of the interest region;
and cutting out the three-dimensional medical image of the region where the lung nodule is located according to the maximum diameter of the lung nodule as the side length.
8. The method according to any one of claims 1 to 7, wherein the determining an actual lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result comprises:
calculating a mean of the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result to determine a segmentation probability;
and taking the region with the segmentation probability larger than a preset threshold value as a lung nodule region, and outputting the actual three-dimensional lung nodule segmentation result.
9. A device for semi-automatic segmentation of lung nodules based on deep learning, the device comprising:
the system comprises a three-dimensional lung medical image data acquisition unit, a data processing unit and a data processing unit, wherein the three-dimensional lung medical image data acquisition unit is used for acquiring three-dimensional lung medical image data to be segmented, and preprocessing and data enhancing the three-dimensional lung medical image data to be segmented;
a position prior information obtaining unit, configured to obtain position prior information in an interaction layer of the three-dimensional lung medical image data to be segmented, and generate an interaction information map according to the position prior information,
the two-dimensional pulmonary nodule segmentation result generation unit is used for inputting the two-dimensional pulmonary medical image of the interaction layer and the interaction information map into a preset interaction model so as to generate a two-dimensional pulmonary nodule segmentation result;
a first three-dimensional pulmonary nodule segmentation result generation unit, configured to cut out a three-dimensional medical image of a region where a pulmonary nodule is located in the three-dimensional pulmonary medical image to be segmented according to the two-dimensional pulmonary nodule segmentation result, so as to serve as an interest region, and input the interest region into a preset segmentation model, so as to generate a first three-dimensional pulmonary nodule segmentation result;
the second three-dimensional pulmonary nodule segmentation result generation unit is used for segmenting an unsegmented layer of the three-dimensional pulmonary medical image data to be segmented layer by layer through a preset propagation model so as to obtain a layer-by-layer segmentation result of each layer, and generating a second three-dimensional pulmonary nodule segmentation result according to the layer-by-layer segmentation result of each layer and the two-dimensional pulmonary nodule segmentation result;
and the actual three-dimensional lung nodule segmentation result determining unit is used for determining an actual three-dimensional lung nodule segmentation result according to the first three-dimensional lung nodule segmentation result and the second three-dimensional lung nodule segmentation result.
10. A computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor when executing the computer readable instructions implements the deep learning based semi-automatic lung nodule segmentation method according to any one of claims 1 to 8.
11. A computer-readable storage medium of computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the method of deep learning based semi-automatic segmentation of lung nodules according to any of claims 1-8.
CN202210443111.4A 2022-04-25 2022-04-25 Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning Active CN114693671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210443111.4A CN114693671B (en) 2022-04-25 2022-04-25 Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210443111.4A CN114693671B (en) 2022-04-25 2022-04-25 Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning

Publications (2)

Publication Number Publication Date
CN114693671A true CN114693671A (en) 2022-07-01
CN114693671B CN114693671B (en) 2022-11-29

Family

ID=82144996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210443111.4A Active CN114693671B (en) 2022-04-25 2022-04-25 Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN114693671B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020006216A1 (en) * 2000-01-18 2002-01-17 Arch Development Corporation Method, system and computer readable medium for the two-dimensional and three-dimensional detection of lesions in computed tomography scans
US20190287242A1 (en) * 2018-03-16 2019-09-19 Infervision Computed tomography pulmonary nodule detection method based on deep learning
CN111127482A (en) * 2019-12-20 2020-05-08 广州柏视医疗科技有限公司 CT image lung trachea segmentation method and system based on deep learning
CN111768382A (en) * 2020-06-30 2020-10-13 重庆大学 Interactive segmentation method based on lung nodule growth form
CN112258530A (en) * 2020-12-21 2021-01-22 四川大学 Neural network-based computer-aided lung nodule automatic segmentation method
CN112991269A (en) * 2021-02-07 2021-06-18 复旦大学 Identification and classification method for lung CT image
CN113971728A (en) * 2021-10-25 2022-01-25 北京百度网讯科技有限公司 Image recognition method, model training method, device, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIN Hangyu et al.: "Automatic segmentation method for digital pathology slides based on unsupervised learning", Journal of Sichuan University (Medical Sciences) *
MA Jinlin et al.: "Lung nodule segmentation method based on deep transfer learning", Journal of Computer Applications *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351215A (en) * 2023-12-06 2024-01-05 上海交通大学宁波人工智能研究院 Artificial shoulder joint prosthesis design system and method
CN117351215B (en) * 2023-12-06 2024-02-23 上海交通大学宁波人工智能研究院 Artificial shoulder joint prosthesis design system and method

Also Published As

Publication number Publication date
CN114693671B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
US11488021B2 (en) Systems and methods for image segmentation
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN110706246B (en) Blood vessel image segmentation method and device, electronic equipment and storage medium
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN111640120A (en) Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
CN112435263A (en) Medical image segmentation method, device, equipment, system and computer storage medium
CN109919254B (en) Breast density classification method, system, readable storage medium and computer device
CN116097302A (en) Connected machine learning model with joint training for lesion detection
CN112561869B (en) Pancreatic neuroendocrine tumor postoperative recurrence risk prediction method
Popescu et al. Retinal blood vessel segmentation using pix2pix gan
CN112750137A (en) Liver tumor segmentation method and system based on deep learning
CN113706486A (en) Pancreas tumor image segmentation method based on dense connection network migration learning
CN114332132A (en) Image segmentation method and device and computer equipment
CN114693671B (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN110992312B (en) Medical image processing method, medical image processing device, storage medium and computer equipment
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN116128895A (en) Medical image segmentation method, apparatus and computer readable storage medium
CN115375787A (en) Artifact correction method, computer device and readable storage medium
CN112862785B (en) CTA image data identification method, device and storage medium
CN114565626A (en) Lung CT image segmentation algorithm based on PSPNet improvement
CN114004795A (en) Breast nodule segmentation method and related device
CN115578400A (en) Image processing method, and training method and device of image segmentation network
CN112990367A (en) Image processing method, device, equipment and storage medium
CN111815569A (en) Image segmentation method, device and equipment based on deep learning and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant