CN112330640A - Segmentation method, device and equipment for nodule region in medical image - Google Patents


Info

Publication number: CN112330640A
Application number: CN202011238342.9A
Authority: CN (China)
Prior art keywords: nodule, network, segmentation, weight, mask
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 陈超, 卢沁阳, 黄凌云, 刘玉宇, 肖京
Current and original assignee: Ping An Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202011238342.9A (the priority date is an assumption and is not a legal conclusion)
Publication of CN112330640A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]

Abstract

The application discloses a method, an apparatus, and a device for segmenting a nodule region in a medical image, relating to the field of medical technology, which can solve the problems that current segmentation of nodule regions in medical images has low accuracy, cannot segment the nodule region efficiently, and cannot meet the requirements of clinical diagnosis. The method comprises the following steps: preprocessing a sample image; calculating a weight mask corresponding to the preprocessed sample image; training and generating a nodule segmentation model that meets a preset training standard based on the weight mask and the BESNet network; and segmenting a target image with the nodule segmentation model to obtain a nodule segmentation result for the target image. The method, apparatus, and device are suitable for accurately segmenting a nodule region in a medical image.

Description

Segmentation method, device and equipment for nodule region in medical image
Technical Field
The present application relates to the field of medical technology, and in particular, to a method, an apparatus, and a device for segmenting a nodule region in a medical image.
Background
A medical image is an image of the internal tissue of the whole or part of a human or animal body, obtained non-invasively for medical treatment or medical research. Computed Tomography (CT) is a radiological diagnostic technique based on the fact that different substances attenuate radiation differently. CT irradiates the measured object with radioactive rays from all directions, measures the intensity of the rays after they pass through the object, and computes the linear attenuation coefficient of the material at each point of the object via a reconstruction algorithm, thereby obtaining a tomographic image of the measured object. CT-reconstructed tomographic images have the advantages of no image overlap and high density and spatial resolution, and are therefore of great interest as a non-destructive medical diagnostic technology.
CT can scan the brain, chest, abdomen, spine, limbs, etc., and the scanned images are used in the auxiliary analysis of diseases. In clinical practice, medical images acquired by CT can be applied to nodule segmentation. A nodule is a descriptive term used in imaging examination and may include thyroid nodules, breast nodules, body-surface skin nodules, lung nodules, liver nodules, and the like. An accurate nodule segmentation result can effectively reflect the pathology and morphological characteristics of the nodule, thereby helping doctors diagnose and analyze the lesion.
Currently, when segmenting a nodule region in a medical image, most existing algorithms first obtain an approximate contour of the nodule in the CT image using methods such as threshold segmentation, region growing, and edge detection, and then correct the segmentation result to compensate for parts of the lung parenchyma missing due to various lesions. However, such segmentation methods have low accuracy, cannot segment the nodule region efficiently, and cannot meet the requirements of clinical diagnosis.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, and a device for segmenting a nodule region in a medical image, mainly aiming to solve the problems that current segmentation of nodule regions in medical images has low accuracy, cannot segment the nodule region efficiently, and cannot meet clinical diagnosis requirements.
According to an aspect of the present application, there is provided a segmentation method for a nodule region in a medical image, the method comprising:
preprocessing a sample image;
calculating a weight mask corresponding to the preprocessed sample image;
training and generating a nodule segmentation model which accords with a preset training standard based on the weight mask and the BESNet network;
and segmenting the target image by using the nodule segmentation model to obtain a nodule segmentation result of the target image.
Preferably, the preprocessing the sample image includes:
determining a closed curve corresponding to the sample image, wherein the closed curve comprises grid points corresponding to pixels to be segmented;
and initializing the distance value of each grid point to obtain an initial distance function value.
Preferably, the calculating a weight mask corresponding to the preprocessed sample image includes:
determining a target distance function value corresponding to each pixel point in the closed curve based on the initial distance function value by performing scanning processing in four directions on each grid point;
generating a weight mask corresponding to the sample image according to the target distance function value;
the determining, based on the initial distance function value, a target distance function value corresponding to each pixel point in the closed curve by performing scanning processing in four directions on each of the grid points, includes:
determining a target grid point of current scanning and a corresponding scanning direction;
comparing the target grid point with its two adjacent grid points lying opposite to the scanning direction, so as to determine a first distance function value corresponding to the target grid point;
and determining the minimum value of the first distance function values corresponding to the four directions as the target distance function value of the target grid point.
Preferably, the training and generating, based on the weight mask and the BESNet network, a nodule segmentation model that meets a preset training standard includes:
determining the BESNet network as a first segmentation network;
masking the network weight of the first segmentation network through the weight mask to obtain a second segmentation network;
constructing a nodule segmentation model using the first segmentation network and the second segmentation network;
training the nodule segmentation model based on the sample images;
and if the loss function value of the nodule segmentation model is smaller than a preset threshold value, determining that the nodule segmentation model meets the preset training standard.
Preferably, the masking the network weights of the first segmentation network by the weight mask to obtain a second segmentation network includes:
acquiring a first weight array corresponding to the first segmentation network, wherein the first weight array comprises first network weights corresponding to convolution kernels in the first segmentation network;
masking the first weight array through the weight mask to obtain a second weight array, wherein the weight mask comprises second network weights corresponding to the convolution kernels;
and generating a second segmentation network according to the second weight array.
Preferably, the masking the first weight array through the weight mask to obtain a second weight array, where the weight mask includes second network weights corresponding to the convolution kernels, includes:
performing binarization processing on the weight mask through a threshold function to generate a binarization mask, wherein the binarization mask is an array formed by 0 and 1, and the size of the binarization mask is the same as that of the first weight array;
multiplying the first weight array and the binarization mask point by point to obtain a second weight array;
the generating a second segmentation network according to the second weight array includes:
updating the network weight of each convolution kernel in the first segmentation network according to the second weight array;
and determining the first segmentation network after the network weight is updated as a second segmentation network.
Preferably, the training the nodule segmentation model based on the sample images includes:
performing first training on the nodule segmentation model based on the second segmentation network and the sample images in a first training stage;
performing second training on the nodule segmentation model based on the first segmentation network and the sample images in a second training stage after the first training stage;
the determining that the nodule segmentation model meets the preset training standard if the loss function value of the nodule segmentation model is smaller than a preset threshold value includes the following steps:
calculating a first loss value for the first training stage based on a first loss function, and calculating a second loss value for the second training stage based on a second loss function;
when the first loss value is smaller than a first preset threshold value and the second loss value is smaller than a second preset threshold value, judging that the nodule segmentation model meets a preset training standard;
the segmenting of the target image by using the nodule segmentation model to obtain the nodule segmentation result of the target image comprises the following steps:
and inputting the target image into the nodule segmentation model conforming to the preset training standard, and obtaining a segmented target nodule region of the target image.
According to another aspect of the present application, there is provided an apparatus for segmenting a nodule region in a medical image, the apparatus comprising:
the processing module is used for preprocessing the sample image;
the calculation module is used for calculating a weight mask corresponding to the preprocessed sample image;
the training module is used for generating a nodule segmentation model which accords with a preset training standard based on the weight mask and the BESNet network training;
and the segmentation module is used for segmenting the target image by using the nodule segmentation model and acquiring a nodule segmentation result of the target image.
Preferably, the processing module includes:
the first determining unit is used for determining a closed curve corresponding to the sample image, wherein the closed curve comprises grid points corresponding to pixels to be segmented;
and the first processing unit is used for carrying out initialization processing on the distance value of each grid point and acquiring an initial distance function value.
Preferably, the calculation module includes:
a second determining unit configured to determine, based on the initial distance function value, a target distance function value corresponding to each pixel point in the closed curve by performing scanning processing in four directions for each of the grid points;
the generating unit is used for generating a weight mask corresponding to the sample image according to the target distance function value;
preferably, the second determining unit is specifically configured to:
determining a target grid point of current scanning and a corresponding scanning direction;
comparing the target grid point with its two adjacent grid points lying opposite to the scanning direction, so as to determine a first distance function value corresponding to the target grid point;
and determining the minimum value of the first distance function values corresponding to the four directions as the target distance function value of the target grid point.
Preferably, the training module comprises:
a third determining unit configured to determine the BESNet network as the first segmentation network;
the second processing unit is used for masking the network weight of the first segmentation network through the weight mask to obtain a second segmentation network;
a construction unit configured to construct a nodule segmentation model using the first segmentation network and the second segmentation network;
a training unit for training the nodule segmentation model based on the sample image;
and the judging unit is used for determining that the nodule segmentation model meets the preset training standard if the loss function value of the nodule segmentation model is determined to be smaller than a preset threshold value.
Preferably, the second processing unit is specifically configured to:
acquiring a first weight array corresponding to the first segmentation network, wherein the first weight array comprises first network weights corresponding to convolution kernels in the first segmentation network;
masking the first weight array through the weight mask to obtain a second weight array, wherein the weight mask comprises second network weights corresponding to the convolution kernels;
and generating a second segmentation network according to the second weight array.
Preferably, the second processing unit is specifically configured to:
performing binarization processing on the weight mask through a threshold function to generate a binarization mask, wherein the binarization mask is an array formed by 0 and 1, and the size of the binarization mask is the same as that of the first weight array;
multiplying the first weight array and the binarization mask point by point to obtain a second weight array;
preferably, the second processing unit is specifically configured to:
updating the network weight of each convolution kernel in the first segmentation network according to the second weight array;
and determining the first segmentation network after the network weight is updated as a second segmentation network.
Preferably, the training unit is specifically configured to:
performing first training on the nodule segmentation model based on the second segmentation network and the sample images in a first training stage;
performing second training on the nodule segmentation model based on the first segmentation network and the sample images in a second training stage after the first training stage;
preferably, the determination unit is specifically configured to:
calculating a first loss value for the first training stage based on a first loss function, and calculating a second loss value for the second training stage based on a second loss function;
when the first loss value is smaller than a first preset threshold value and the second loss value is smaller than a second preset threshold value, judging that the nodule segmentation model meets a preset training standard;
preferably, the segmentation module specifically includes:
and the input unit is used for inputting the target image into the nodule segmentation model which accords with the preset training standard, and acquiring the segmented target nodule region of the target image.
According to yet another aspect of the application, a non-volatile readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the above method for segmenting a nodule region in a medical image.
According to yet another aspect of the present application, there is provided a computer device comprising a non-volatile readable storage medium, a processor and a computer program stored on the non-volatile readable storage medium and executable on the processor, the processor implementing the above method for segmentation of a nodule region in a medical image when executing the program.
By means of the above technical solution, the method, apparatus, and device for segmenting a nodule region in a medical image provided by the present application first compute a weight mask corresponding to the sample image, and then train a nodule segmentation model that meets a preset training standard based on the weight mask and the BESNet network, so that the nodule region in a target image can be accurately segmented with the nodule segmentation model. The BESNet network is particularly effective at boundary segmentation, but the boundary is not the most important part of nodule segmentation: some gold-standard contours are not completely accurate, some nodules are accompanied by posterior acoustic shadows that make their contours very difficult to determine, and pursuing perfectly precise segmentation of the nodule boundary is therefore of limited practical value, while the weight mask strengthens the weights representing the approximate contour region. Accordingly, in the present application, a more accurate nodule segmentation model can be obtained by combining the weight mask with the BESNet network in fusion training, so that the nodule segmentation result is more accurate and can meet the requirements of clinical diagnosis.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application to the disclosed embodiment. In the drawings:
fig. 1 is a flowchart illustrating a segmentation method for a nodule region in a medical image according to an embodiment of the present application;
fig. 2 is a flowchart illustrating another segmentation method for a nodule region in a medical image according to an embodiment of the present application;
fig. 3 is a schematic structural diagram illustrating a segmentation apparatus for a nodule region in a medical image according to an embodiment of the present application;
fig. 4 is a schematic structural diagram illustrating another segmentation apparatus for a nodule region in a medical image according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
To solve the problems that current segmentation of a nodule region in a medical image has low accuracy, cannot segment the nodule region efficiently, and cannot meet the requirements of clinical diagnosis, an embodiment of the present application provides a method for segmenting a nodule region in a medical image. As shown in fig. 1, the method includes:
101. and preprocessing the sample image.
For this embodiment, in a specific application scenario, data smoothing may first be performed on the sample image to reduce the influence of noise; to smooth the image, a Gaussian filter may be convolved with the sample image. In addition, to facilitate calculating the weight mask corresponding to the sample image, a closed curve may be annotated on the sample image and the image may be gridded, yielding the grid points corresponding to the pixels within the closed curve.
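The Gaussian pre-smoothing step above can be sketched as follows. This is a minimal pure-Python illustration under assumed conventions (separable sampled Gaussian, edge replication at the borders, kernel radius of three sigma); the function names are hypothetical and do not appear in the patent:

```python
import math

def gaussian_kernel_1d(sigma, radius):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    vals = [math.exp(-(x * x) / (2.0 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def smooth_image(img, sigma=1.0):
    """Separable Gaussian convolution of a 2-D image (list of rows) with
    edge replication at the borders; a radius of 3*sigma is an assumed cutoff."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel_1d(sigma, radius)
    h, w = len(img), len(img[0])
    clamp = lambda v, hi: max(0, min(hi, v))
    # Horizontal pass ...
    tmp = [[sum(k[i] * row[clamp(x + i - radius, w - 1)] for i in range(len(k)))
            for x in range(w)] for row in img]
    # ... then vertical pass.
    return [[sum(k[i] * tmp[clamp(y + i - radius, h - 1)][x] for i in range(len(k)))
             for x in range(w)] for y in range(h)]
```

In practice a library Gaussian filter routine would replace this sketch; the point is only that smoothing is a convolution of the sample image with a Gaussian kernel.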
The execution subject may be a segmentation system for segmenting nodule regions in medical images. The system contains a nodule segmentation model built from a weight mask and a BESNet network; the model can be trained with sample images, and once it is determined to meet the preset training standard, it is used to segment a target image on which nodule segmentation is to be performed, thereby obtaining a nodule segmentation result.
102. Calculating a weight mask corresponding to the preprocessed sample image.
For this embodiment, segmentation of the approximate contour of the nodule can be strengthened by calculating the weight mask corresponding to the sample image. Specifically, borrowing the signed distance function (SDF) method from the level-set algorithm, an SDF weight mask is constructed for each image so that regions closer to the nodule boundary have lower weight and regions closer to the nodule's central region or its exterior have higher weight; that is, the boundary is de-emphasized.
103. Training and generating a nodule segmentation model that meets a preset training standard based on the weight mask and the BESNet network.
For the present embodiment, in a specific application scenario, the BESNet network emphasizes fine segmentation of nodule boundaries, whereas the SDF weight mask emphasizes coarse segmentation of the nodule contour. The present application takes the BESNet network as the backbone and combines it with the SDF weight mask through the selection of batches during training. In the solution provided by the application, attention is first paid to segmenting the approximate contour of the nodule: for example, in the first 20 epochs (one epoch corresponds to training once over all sample images in the training set), the loss may be computed by point-wise multiplication with the SDF weight mask, i.e. every pixel in the image is weighted; meanwhile, the BDP (boundary decoding path) branch loss in BESNet is not calculated, and the BDP weighting is not included in the MDP (main decoding path) branch loss function. After 20 epochs, the BESNet loss calculation method described earlier is used, and the SDF weight mask is no longer applied.
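The staged loss schedule above might be sketched as follows. This is a hedged illustration only: the function and variable names are invented, the per-pixel losses are given as flat lists, and the real BESNet loss involves network branches not modeled here.

```python
# Assumed schedule mirroring the two training stages described above: during
# the first WARMUP_EPOCHS epochs the per-pixel main-branch (MDP) loss is
# weighted by the SDF mask and the boundary-branch (BDP) loss is skipped;
# afterwards the plain BESNet-style loss is used without the SDF mask.
WARMUP_EPOCHS = 20  # "the first 20 epochs" from the description above

def combined_loss(epoch, mdp_pixel_losses, sdf_weights, bdp_loss):
    """Scalar training loss for one batch at a given epoch (illustrative only;
    all names and the averaging convention are assumptions)."""
    if epoch < WARMUP_EPOCHS:
        # Stage 1: SDF-weighted main-branch loss; boundary branch ignored.
        weighted = [l * w for l, w in zip(mdp_pixel_losses, sdf_weights)]
        return sum(weighted) / len(weighted)
    # Stage 2: BESNet-style loss with the boundary branch, no SDF weighting.
    return sum(mdp_pixel_losses) / len(mdp_pixel_losses) + bdp_loss
```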
104. Segmenting the target image with the nodule segmentation model to obtain a nodule segmentation result of the target image.
For this embodiment, in a specific application scenario, after it is determined that the nodule segmentation model meets the preset training standard, the model may be put into application: the target image to be segmented is input into the nodule segmentation model, which outputs the corresponding nodule segmentation result.
By the segmentation method for a nodule region in a medical image in this embodiment, a weight mask corresponding to the sample image is obtained through processing and calculation, and a nodule segmentation model meeting the preset training standard is generated through training based on the weight mask and the BESNet network, so that the nodule region in the target image can be accurately segmented with the nodule segmentation model. The BESNet network is particularly effective at boundary segmentation, but the boundary is not the most important part of nodule segmentation: some gold-standard contours are not completely accurate, some nodules are accompanied by posterior acoustic shadows that make their contours very difficult to determine, and pursuing perfectly precise segmentation of the nodule boundary is therefore of limited practical value, while the weight mask strengthens the weights representing the approximate contour region. Accordingly, in the present application, a more accurate nodule segmentation model can be obtained by combining the weight mask with the BESNet network in fusion training, so that the nodule segmentation result is more accurate and can meet the requirements of clinical diagnosis.
Further, as a refinement and extension of the specific implementation of the above embodiment, to fully explain the implementation process of this embodiment, another method for segmenting a nodule region in a medical image is provided. As shown in fig. 2, the method includes:
201. and preprocessing the sample image.
For the present embodiment, in a specific application scenario, the step 201 of the embodiment may specifically include: determining a closed curve corresponding to the sample image, wherein the closed curve comprises grid points corresponding to pixels to be segmented; and initializing the distance value of each grid point to obtain an initial distance function value.
Specifically, when initializing the distance value of each grid point, the distance function values of points on the contour line corresponding to the closed curve may be initialized to 0, the distance function values of points adjacent to the contour may be initialized to 1, and the distance function values of all other points may be initialized to infinity.
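The initialization rule above (0 on the contour, 1 for its 4-neighbours, infinity elsewhere) can be sketched as follows; the function name and the boolean-mask input convention are assumptions for illustration:

```python
INF = float("inf")

def init_distance(contour_mask):
    """Initial distance values as described above: 0 on the closed curve,
    1 for 4-neighbours of the curve, infinity everywhere else.
    `contour_mask[y][x]` is True where the pixel lies on the curve."""
    h, w = len(contour_mask), len(contour_mask[0])
    d = [[INF] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if contour_mask[y][x]:
                d[y][x] = 0.0
    for y in range(h):
        for x in range(w):
            if d[y][x] == INF and any(
                    0 <= y + dy < h and 0 <= x + dx < w and d[y + dy][x + dx] == 0.0
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))):
                d[y][x] = 1.0
    return d
```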
202. Calculating a weight mask corresponding to the preprocessed sample image.
For the present embodiment, in a specific application scenario, the embodiment step 202 may specifically include: scanning in four directions is carried out on each grid point, and a target distance function value corresponding to each pixel point in the closed curve is determined based on the initial distance function value; and generating a weight mask corresponding to the sample image according to the target distance function value.
Correspondingly, when the target distance function value corresponding to each pixel point in the closed curve is determined based on the initial distance function value by scanning each grid point in four directions, this step may specifically include: determining the target grid point of the current scan and the corresponding scanning direction; comparing the initial distance function values of the target grid point and of its two adjacent grid points lying opposite to the scanning direction, so as to determine a first distance function value corresponding to the target grid point; and determining the minimum of the first distance function values corresponding to the four directions as the target distance function value of the target grid point.
The scanning processes in the four directions are as follows:
(1) In the first scan, in the x+, y+ direction (left to right, top to bottom), take the minimum of the current point's value and the values of its left and upper neighbours plus 1, and mark the state matrix as 1. Specifically, for each grid point E(i, j), compare the distance function values of the left and upper adjacent points (i-1, j) and (i, j-1); if the smaller of the two, min, plus 1 is less than the distance function value of the scan point E(i, j), set the distance function value of E(i, j) to min+1; otherwise leave it unchanged.
(2) In the second scan, in the x-, y+ direction (right to left, top to bottom), take the minimum of the current point's value and the values of its right and upper neighbours plus 1, and mark the state matrix as 1. Specifically, for each grid point E(i, j), compare the distance function values of the right and upper adjacent points (i+1, j) and (i, j-1); if the smaller value min plus 1 is less than the distance function value of E(i, j), set it to min+1; otherwise leave it unchanged.
(3) In the third scan, in the x+, y- direction (left to right, bottom to top), take the minimum of the current point's value and the values of its left and lower neighbours plus 1, and mark the state matrix as 1. Specifically, for each grid point E(i, j), compare the distance function values of the left and lower adjacent points (i-1, j) and (i, j+1); if the smaller value min plus 1 is less than the distance function value of E(i, j), set it to min+1; otherwise leave it unchanged.
(4) In the fourth scan, in the x-, y- direction (right to left, bottom to top), take the minimum of the current point's value and the values of its right and lower neighbours plus 1, and mark the state matrix as 1. Specifically, for each grid point E(i, j), compare the distance function values of the right and lower adjacent points (i+1, j) and (i, j+1); if the smaller value min plus 1 is less than the distance function value of E(i, j), set it to min+1; otherwise leave it unchanged.
In this way, after the scans in all four directions, each grid point has been compared with its four neighbours and takes the minimum distance value, so that the globally minimal distance value is finally obtained at every grid point; the minimum distance function value of each grid point can then be taken as the weight mask corresponding to the sample image.
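Taken together, the four scans amount to a sweeping-style city-block distance transform over the initialized grid. The sketch below is an illustrative pure-Python rendering of the procedure described above, plus a hypothetical mapping from boundary distance to per-pixel weight (the mapping and its `cap` parameter are assumptions, not taken from the patent):

```python
def sweep_distance(d):
    """Four directional sweeps over an initialized distance grid `d`
    (0 on the contour, 1 beside it, infinity elsewhere). Each sweep relaxes
    a point against its two already-visited neighbours; after all sweeps,
    each grid point holds its city-block distance to the contour."""
    h, w = len(d), len(d[0])
    sweeps = [
        (range(h), range(w), (-1, 0), (0, -1)),             # top->bottom, left->right: upper/left
        (range(h), range(w - 1, -1, -1), (-1, 0), (0, 1)),  # top->bottom, right->left: upper/right
        (range(h - 1, -1, -1), range(w), (1, 0), (0, -1)),  # bottom->top, left->right: lower/left
        (range(h - 1, -1, -1), range(w - 1, -1, -1), (1, 0), (0, 1)),  # lower/right
    ]
    for ys, xs, n1, n2 in sweeps:
        for y in ys:
            for x in xs:
                for dy, dx in (n1, n2):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and d[ny][nx] + 1 < d[y][x]:
                        d[y][x] = d[ny][nx] + 1
    return d

def weight_mask(dist, cap=4.0):
    """Hypothetical distance-to-weight mapping: weight 0 on the nodule
    boundary, rising linearly to 1 at `cap` pixels away (cap is an assumed
    parameter, chosen here only for illustration)."""
    return [[min(v, cap) / cap for v in row] for row in dist]
```

Any monotone mapping that assigns low weight near the boundary and high weight toward the nodule centre or exterior would serve the purpose described above; the linear clip here is just one choice.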
203. Determining the BESNet network as the first segmentation network.
For this embodiment, since the present application involves fusion training of the weight mask and the BESNet network, with the BESNet network as the main body, the BESNet network may first be determined as the first segmentation network when training the nodule segmentation model.
204. Masking the network weights of the first segmentation network with the weight mask to obtain a second segmentation network.
In a convolutional neural network, each unit of a convolution kernel corresponds to its own network weight, and the network weights are obtained through network training. Taking a 3 × 3 convolution kernel as an example, the kernel contains 9 units and, accordingly, 9 network weights. When pixels in the image are convolved (i.e. when the convolution kernel is used to extract features from the image), each pixel value is multiplied by the corresponding network weight in the kernel and the products are summed to form the output. The mask in the embodiment of the present application is used to screen the network weights of the convolution kernel: when a convolution kernel is masked with the mask, the pass rate of network weights sensitive to the image feature distribution is set higher than that of insensitive weights, so that the network weights insensitive to the image feature distribution are screened out. Optionally, the mask may be a real-valued mask or a binarization mask, where the binarization mask is obtained by binarizing the real-valued mask.
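The multiply-and-add behaviour of a single 3 × 3 kernel described above reduces to a nine-term weighted sum; the sketch below (function name assumed) computes one output value of a valid convolution:

```python
def conv_output_at(patch, kernel):
    """One output value of a 3x3 convolution: each pixel of the 3x3 image
    patch is multiplied by the matching kernel unit (network weight) and
    the nine products are summed, as described above."""
    return sum(patch[i][j] * kernel[i][j] for i in range(3) for j in range(3))
```

These nine kernel weights are exactly the per-kernel network weights that the weight mask screens in the following steps.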
For the present embodiment, in a specific application scenario, the step 204 of the embodiment may specifically include: acquiring a first weight array corresponding to a first segmentation network, wherein the first weight array comprises first network weights corresponding to convolution kernels in the first segmentation network; masking the first weight array through a weight mask to obtain a second weight array, wherein the weight mask comprises second network weights corresponding to the convolution kernels; a second segmented network is generated according to the second weight array.
Correspondingly, when the first weight array is masked by the weight mask to obtain a second weight array, where the weight mask includes the second network weights corresponding to the convolution kernels, the embodiment step 204 may specifically include: performing binarization processing on the weight mask through a threshold function to generate a binarization mask, where the binarization mask is an array formed by 0 and 1 and has the same size as the first weight array; and point-wise multiplying the first weight array and the binarization mask to obtain the second weight array.
In the embodiment of the present application, the weight mask has the same network structure as the first segmentation network, and the number of weights in the weight mask equals the number of network weights in the first segmentation network. Thus, in an alternative embodiment, the computer device obtains the network weights corresponding to each convolution kernel in the weight mask to generate a real mask consistent with the size of the first weight array. Optionally, the real mask is a weight matrix composed of the network weights in the mask network. For the masking process, optionally, the computer device multiplies the real mask point-wise with the first weight array to obtain the second weight array, where point-wise multiplication means multiplying the network weight in the i-th row and j-th column of the first weight array by the mask value in the i-th row and j-th column of the mask. Illustratively, if the first weight array corresponding to the first segmentation network is Ws and the real mask corresponding to the mask network is Mreal, the second weight array obtained after masking is Ws × Mreal. In the implementation process it was found that masking the first weight array directly with the real mask does not work well; therefore, in a possible implementation, the computer device binarizes the real mask (to achieve a filtering effect) and then performs the masking with the binarized mask.
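A minimal sketch of the binarize-then-multiply masking described above, using nested lists for the weight arrays; the threshold value tau = 0.5 is an assumption, since the patent does not state the cut-off of its threshold function:

```python
def binarize(mask, tau=0.5):
    """Threshold function: map each real mask value to 1 if it passes the
    (assumed) cut-off tau, else 0, yielding the binarization mask."""
    return [[1.0 if v >= tau else 0.0 for v in row] for row in mask]

def apply_mask(weights, mask, tau=0.5):
    """Point-wise product of the first weight array with the binarized mask,
    yielding the second weight array (same shape as the input)."""
    m = binarize(mask, tau)
    return [[w * b for w, b in zip(w_row, m_row)]
            for w_row, m_row in zip(weights, m)]
```

Weights whose mask value falls below the threshold are zeroed out, which is the filtering effect the text refers to.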
In a specific application scenario, when generating the second segmentation network according to the second weight array, the embodiment step 204 may specifically include: updating the network weight of each convolution kernel in the first segmentation network according to the second weight array; and determining the first segmentation network after the network weight is updated as a second segmentation network.
For this embodiment, it should be noted that, since the network weight of the first segmentation network is already fixed, after the subsequent mask network is updated, the computer device may perform mask processing on the first weight array according to the updated binarization mask, so as to obtain the second weight array.
205. And constructing a nodule segmentation model by using the first segmentation network and the second segmentation network.
In this embodiment, when constructing the nodule segmentation model, as a preferable mode, the second segmentation network may be used in the first half of the training of the nodule segmentation model, and the first segmentation network in the second half. For example, with a total of 40 epochs, the first 20 epochs can be trained on the basis of the second segmentation network and the last 20 epochs on the basis of the first segmentation network.
206. A nodule segmentation model is trained based on the sample images.
For the present embodiment, in a specific application scenario, the embodiment step 206 may specifically include: performing first training on a nodule segmentation model based on a second segmentation network and a sample image in a first training stage; in a second training phase, subsequent to the first training phase, the nodule segmentation model is second trained based on the first segmentation network and the sample images.
For this embodiment, during the first training, as a preferable mode, the loss of the first 20 epochs may be multiplied by the SDF weight mask when it is calculated; meanwhile, the BDP branch loss in the BESNet network is not calculated, the MDP branch loss function does not include the BDP weight, and only the parameter values on the MDP-related path are back-propagated and optimized, i.e., the convolution of the uppermost BDP layer keeps its initialized values. In the second training, for example starting from the 21st epoch, the BESNet loss calculation is used alone without the SDF weight mask, and both the BDP and MDP branches are considered when training the network.
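The two-stage schedule described above can be summarized in a small helper. The epoch boundary of 20 follows the example in the text; the flag names are illustrative, not taken from the patent:

```python
def training_phase(epoch, switch_epoch=20):
    """Return the training configuration for a given 1-based epoch under the
    two-stage schedule: masked network + SDF-weighted MDP loss first, then
    the full BESNet loss over both branches."""
    if epoch <= switch_epoch:
        # First stage: second (masked) network, SDF weighting, no BDP loss.
        return {"network": "second", "use_sdf_mask": True, "use_bdp_loss": False}
    # Second stage: first (unmasked) network, plain BESNet loss, both branches.
    return {"network": "first", "use_sdf_mask": False, "use_bdp_loss": True}
```

A training loop would query this helper once per epoch and pick the network weights and loss terms accordingly.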
207. And if the loss function value of the nodule segmentation model is judged to be smaller than the preset threshold value, judging that the nodule segmentation model meets the preset training standard.
For this embodiment, in a specific application scenario, the step 207 of the embodiment may specifically include: calculating a first loss value for the first training phase based on the first loss function, and calculating a second loss value for the second training phase based on the second loss function; and when the first loss value is smaller than a first preset threshold value and the second loss value is smaller than a second preset threshold value, judging that the nodule segmentation model meets a preset training standard.
Wherein, the first loss function can adopt a self-defined cross entropy loss function (1) weighted by SDF:
L1(x) = -wSDF(x)·[g(x)·log p(x) + (1-g(x))·log(1-p(x))] #(1)
the second loss function may employ loss functions (2), (3):
L2(x) = -[(1+b(x))·g(x)·log p(x) + (1-g(x))·log(1-p(x))] #(2)
b(x) = α·max(β - pG(x), 0) #(3)
wherein g(x) in formula (2) is 0 or 1, representing the background pixels and the target pixels in the segmentation gold standard, respectively. It can be seen that this in effect adds the weight b(x) to the positive part of the cross entropy. From formula (3), b(x) is greater than 0 only when pG(x) < β, in which case the MDP branch is weighted by the BDP branch. The role of α is to control the range of the weight; the hyper-parameters in this patent are set to α = 0.5 and β = 0.1, which means that the weighting is applied only when the output confidence of the boundary region is less than 0.1, and the maximum value of this weight is 0.05. When the output confidence of the BDP in a boundary region is less than 0.1, segmentation of that part of the boundary is extremely difficult, so weighting such regions conforms to the network's idea of enhancing the boundary.
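The per-pixel forms of these losses can be sketched as follows. Formula (3) is taken directly from the text with α = 0.5 and β = 0.1; the exact per-pixel forms of losses (1) and (2) are assumed reconstructions from the surrounding description (SDF-weighted cross entropy, and cross entropy with b(x) added to the positive term):

```python
import math

def boundary_weight(p_bdp, alpha=0.5, beta=0.1):
    """b(x) = alpha * max(beta - pG(x), 0), per formula (3)."""
    return alpha * max(beta - p_bdp, 0.0)

def sdf_weighted_pixel_loss(p, g, w_sdf, eps=1e-7):
    """Per-pixel cross entropy weighted by the SDF mask value w_sdf
    (assumed form of loss (1))."""
    p = min(max(p, eps), 1 - eps)  # clamp for numerical stability
    return -w_sdf * (g * math.log(p) + (1 - g) * math.log(1 - p))

def mdp_pixel_loss(p, g, p_bdp, eps=1e-7):
    """Per-pixel cross entropy with b(x) added to the positive (target) term,
    per the description of formula (2); p_bdp is the BDP output confidence."""
    p = min(max(p, eps), 1 - eps)
    b = boundary_weight(p_bdp)
    return -((1 + b) * g * math.log(p) + (1 - g) * math.log(1 - p))
```

With p_bdp = 0 the extra weight reaches its stated maximum of 0.05, and for p_bdp ≥ 0.1 it vanishes, reducing (2) to plain cross entropy.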
208. And inputting the target image into a nodule segmentation model meeting a preset training standard, and acquiring a segmented target nodule region of the target image.
For this embodiment, after the nodule segmentation model meeting the preset training standard is obtained through training, the target image may be input into the nodule segmentation model, and the nodule segmentation model may implement segmentation processing on the target image based on the combination of the first segmentation network and the second segmentation network, so as to further obtain a segmented target nodule region.
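After the model outputs per-pixel probabilities for the target image, the segmented target nodule region can be obtained by a simple binarization step; the 0.5 cut-off below is an assumed post-processing choice, not specified in the patent:

```python
def extract_nodule_region(prob_map, threshold=0.5):
    """Binarize a nested-list probability map from the segmentation model
    into the target nodule region (1 = nodule pixel, 0 = background)."""
    return [[1 if p >= threshold else 0 for p in row] for row in prob_map]
```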
According to the segmentation method for a nodule region in a medical image provided by the present application, the weight mask corresponding to the sample image is first obtained through preprocessing and calculation, and a nodule segmentation model meeting the preset training standard is generated by training based on the weight mask and the BESNet network, so that the nodule region in the target image can be accurately segmented by the nodule segmentation model. The BESNet network is clearly effective at boundary segmentation, but the boundary is not the most important part of nodule segmentation: some gold-standard contours are not completely accurate, some nodules are accompanied by posterior acoustic shadows that make determination of their contours very difficult, and pursuing exactly accurate boundary segmentation is of limited practical significance, whereas the weight mask can strengthen the weights representing the approximate contour region. Therefore, by combining the weight mask with the BESNet network in fusion training, the present application obtains a more accurate nodule segmentation model, so that the nodule segmentation result is more accurate and meets the requirements of clinical diagnosis.
In practical application of this technical scheme, the generated nodule segmentation model shows a large improvement on all nodule segmentation indices. Comparing the results of the 16th and the 39th epoch of the nodule segmentation model in the present application shows that when the second network weights are used, the model indeed tends more to segment the whole nodule, and DICE, which is more easily influenced by the TP part of the segmentation, is therefore higher at that point. When the model is trained to the 39th epoch, although the DICE index decreases, the IOU improves greatly and the Hausdorff distance becomes better, reflecting that as the emphasis of the model shifts to fine segmentation of the boundary, the segmentation details at the boundary become more intricate, so the TP part and hence DICE decrease slightly; but since the boundary segmentation is better, both the IOU and the Hausdorff distance improve accordingly. The final DICE indices of the model differ by less than two points, the IOU reaches its highest performance at close to 0.9, and the Hausdorff distance also reaches its best value.
Further, as a specific implementation of the methods shown in fig. 1 and fig. 2, an embodiment of the present application provides a segmentation apparatus for a nodule region in a medical image. As shown in fig. 3, the apparatus includes: a processing module 31, a calculating module 32, a training module 33 and a segmentation module 34;
a processing module 31, operable to pre-process the sample image;
a calculating module 32, configured to calculate a weight mask corresponding to the preprocessed sample image;
a training module 33, configured to generate a nodule segmentation model meeting a preset training standard based on the weight mask and the BESNet network training;
and the segmentation module 34 may be configured to segment the target image by using the nodule segmentation model, and obtain a nodule segmentation result of the target image.
In a specific application scenario, as shown in fig. 4, the processing module 31 may specifically include: a first determining unit 311 and a first processing unit 312;
a first determining unit 311, configured to determine a closed curve corresponding to the sample image, where the closed curve includes grid points corresponding to pixels to be segmented;
the first processing unit 312 may be configured to perform initialization processing on the distance values of the grid points, and obtain initial distance function values.
Correspondingly, as shown in fig. 4, the calculating module 32 may specifically include: a second determination unit 321, a generation unit 322;
a second determining unit 321 operable to determine a target distance function value corresponding to each pixel point in the closed curve based on the initial distance function value by performing scanning processing in four directions for each grid point;
a generating unit 322, configured to generate a weight mask corresponding to the sample image according to the target distance function value;
in a specific application scenario, the second determining unit 321 may specifically be configured to: determine the target grid point of the current scan and the corresponding scanning direction; compare the target grid point with its two adjacent grid points in the direction opposite to the scan, based on the initial distance function values, to determine a first distance function value corresponding to the target grid point; and determine the minimum of the first distance function values corresponding to the four directions as the target distance function value of the target grid point.
Correspondingly, as shown in fig. 4, the training module 33 may specifically include: a third determination unit 331, a second processing unit 332, a construction unit 333, a training unit 334, a decision unit 335;
a third determining unit 331 operable to determine the BESNet network as the first segmentation network;
the second processing unit 332 is configured to perform mask processing on the network weights of the first segmentation network through a weight mask to obtain a second segmentation network;
a construction unit 333 operable to construct a nodule segmentation model using the first segmentation network and the second segmentation network;
a training unit 334 operable to train a nodule segmentation model based on the sample image;
the determining unit 335 may be configured to determine that the nodule segmentation model meets the preset training standard if it is determined that the loss function value of the nodule segmentation model is smaller than the preset threshold.
In a specific application scenario, the second processing unit 332 may be specifically configured to obtain a first weight array corresponding to the first segmentation network, where the first weight array includes first network weights corresponding to each convolution kernel in the first segmentation network; masking the first weight array through a weight mask to obtain a second weight array, wherein the weight mask comprises second network weights corresponding to the convolution kernels; a second segmented network is generated according to the second weight array.
Correspondingly, when the first weight array is masked by the weight mask to obtain the second weight array, the second processing unit 332 is specifically configured to perform binarization processing on the weight mask through a threshold function to generate a binarization mask, where the binarization mask is an array formed by 0 and 1 and has the same size as the first weight array; and to point-wise multiply the first weight array and the binarization mask to obtain the second weight array.
In a specific application scenario, when a second segmentation network is generated according to the second weight array, the second processing unit 332 may be specifically configured to update the network weight of each convolution kernel in the first segmentation network according to the second weight array; and determining the first segmentation network after the network weight is updated as a second segmentation network.
Correspondingly, the training unit 334 is specifically configured to perform a first training on the nodule segmentation model based on the second segmentation network and the sample image in a first training stage; in a second training phase, subsequent to the first training phase, the nodule segmentation model is second trained based on the first segmentation network and the sample images.
In a specific application scenario, the determining unit 335 is specifically configured to calculate a first loss value of the first training phase based on the first loss function, and calculate a second loss value of the second training phase based on the second loss function; and when the first loss value is smaller than a first preset threshold value and the second loss value is smaller than a second preset threshold value, judging that the nodule segmentation model meets a preset training standard.
Correspondingly, as shown in fig. 4, the segmentation module 34 may specifically include: an input unit 341;
the input unit 341 is configured to input the target image into a nodule segmentation model meeting a preset training standard, and obtain a target nodule region obtained by segmenting the target image.
It should be noted that other corresponding descriptions of the functional units related to the segmentation apparatus for a nodule region in a medical image provided in this embodiment may refer to the corresponding descriptions in fig. 1 to fig. 2, and are not repeated herein.
Based on the method shown in fig. 1 to 2, correspondingly, the present embodiment further provides a non-volatile storage medium, on which computer readable instructions are stored, and the readable instructions, when executed by a processor, implement the method for segmenting the junction region in the medical image shown in fig. 1 to 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments of the present application.
Based on the method shown in fig. 1 to fig. 2 and the virtual device embodiments shown in fig. 3 and fig. 4, in order to achieve the above object, the present embodiment further provides a computer device, where the computer device includes a storage medium and a processor; a nonvolatile storage medium for storing a computer program; a processor for executing the computer program to implement the above segmentation method for a nodule region in a medical image as illustrated in fig. 1-2.
Optionally, the computer device may further include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, a sensor, audio circuitry, a WI-FI module, and so forth. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be understood by those skilled in the art that the present embodiment provides a computer device structure that is not limited to the physical device, and may include more or less components, or some components in combination, or a different arrangement of components.
The nonvolatile storage medium can also comprise an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the computer device described above, supporting the operation of information handling programs and other software and/or programs. The network communication module is used for realizing communication among components in the nonvolatile storage medium and communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware.
By applying the technical scheme of the present application, compared with the prior art, the weight mask corresponding to the sample image can first be obtained through preprocessing and calculation, and a nodule segmentation model meeting the preset training standard can be generated by training based on the weight mask and the BESNet network, so that the nodule region in the target image can be accurately segmented by the nodule segmentation model. The BESNet network is clearly effective at boundary segmentation, but the boundary is not the most important part of nodule segmentation: some gold-standard contours are not completely accurate, some nodules are accompanied by posterior acoustic shadows that make determination of their contours very difficult, and pursuing exactly accurate boundary segmentation is of limited practical significance, whereas the weight mask can strengthen the weights representing the approximate contour region. Therefore, by combining the weight mask with the BESNet network in fusion training, the present application obtains a more accurate nodule segmentation model, so that the nodule segmentation result is more accurate and meets the requirements of clinical diagnosis.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A method for segmenting a nodule region in a medical image, comprising:
preprocessing a sample image;
calculating a weight mask corresponding to the preprocessed sample image;
training and generating a nodule segmentation model which accords with a preset training standard based on the weight mask and the BESNet network;
and segmenting the target image by using the nodule segmentation model to obtain a nodule segmentation result of the target image.
2. The method of claim 1, wherein the pre-processing the sample image comprises:
determining a closed curve corresponding to the sample image, wherein the closed curve comprises grid points corresponding to pixels to be segmented;
and initializing the distance value of each grid point to obtain an initial distance function value.
3. The method according to claim 2, wherein the calculating the weight mask corresponding to the preprocessed sample image comprises:
determining a target distance function value corresponding to each pixel point in the closed curve based on the initial distance function value by performing scanning processing in four directions on each grid point;
generating a weight mask corresponding to the sample image according to the target distance function value;
the determining, based on the initial distance function value, a target distance function value corresponding to each pixel point in the closed curve by performing scanning processing in four directions on each of the grid points, includes:
determining a target grid point of current scanning and a corresponding scanning direction;
comparing the target grid point with two adjacent grid points in the scanning reverse direction to determine a first distance function value corresponding to the target grid point;
and determining the minimum value of the first distance function values corresponding to the four directions as the target distance function value of the target grid point.
4. The method of claim 1, wherein the generating a nodule segmentation model that meets a preset training criterion based on the weight mask and BESNet network training comprises:
determining the BESNet network as a first segmentation network;
masking the network weight of the first segmentation network through the weight mask to obtain a second segmentation network;
constructing a nodule segmentation model using the first segmentation network and the second segmentation network;
training the nodule segmentation model based on the sample images;
and if the loss function value of the nodule segmentation model is smaller than a preset threshold value, judging that the nodule segmentation model meets a preset training standard.
5. The method of claim 4, wherein the masking the network weights of the first segmentation network by the weight mask to obtain a second segmentation network comprises:
acquiring a first weight array corresponding to the first segmentation network, wherein the first weight array comprises first network weights corresponding to convolution kernels in the first segmentation network;
masking the first weight array through the weight mask to obtain a second weight array, wherein the weight mask comprises second network weights corresponding to the convolution kernels;
and generating a second segmentation network according to the second weight array.
6. The method of claim 5, wherein the masking the first weight array with the weight mask to obtain a second weight array, the weight mask including second network weights corresponding to the convolution kernels comprises:
performing binarization processing on the weight mask through a threshold function to generate a binarization mask, wherein the binarization mask is an array formed by 0 and 1, and the size of the binarization mask is the same as that of the first weight array;
point-wise multiplying the first weight array and the binarization mask to obtain a second weight array;
said generating a second segmented network from said second array of weights comprises:
updating the network weight of each convolution kernel in the first segmentation network according to the second weight array;
and determining the first segmentation network after the network weight is updated as a second segmentation network.
7. The method of claim 4, wherein the training of the nodule segmentation model based on the sample images comprises:
first training the nodule segmentation model based on the second segmentation network and the sample images in a first training stage;
second training the nodule segmentation model based on the first segmentation network and the sample images in a second training phase after the first training phase;
if the loss function value of the nodule segmentation model is smaller than a preset threshold value, the judging that the nodule segmentation model meets a preset training standard comprises the following steps:
calculating a first loss value for the first training phase based on a first loss function, and calculating a second loss value for the second training phase based on a second loss function;
when the first loss value is smaller than a first preset threshold value and the second loss value is smaller than a second preset threshold value, judging that the nodule segmentation model meets a preset training standard;
the segmenting of the target image by using the nodule segmentation model to obtain the nodule segmentation result of the target image comprises the following steps:
and inputting the target image into the nodule segmentation model conforming to the preset training standard, and obtaining a segmented target nodule region of the target image.
8. An apparatus for segmenting a nodule region in a medical image, comprising:
the processing module is used for preprocessing the sample image;
the calculation module is used for calculating a weight mask corresponding to the preprocessed sample image;
the training module is used for generating a nodule segmentation model which accords with a preset training standard based on the weight mask and the BESNet network training;
and the segmentation module is used for segmenting the target image by using the nodule segmentation model and acquiring a nodule segmentation result of the target image.
9. A non-transitory readable storage medium having stored thereon a computer program, which when executed by a processor implements the method of segmentation of a nodule region in a medical image of any one of claims 1 to 7.
10. A computer device comprising a non-transitory readable storage medium, a processor and a computer program stored on the non-transitory readable storage medium and executable on the processor, wherein the processor implements the segmentation method for a nodule region in a medical image according to any one of claims 1 to 7 when executing the program.
CN202011238342.9A 2020-11-09 2020-11-09 Segmentation method, device and equipment for nodule region in medical image Pending CN112330640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011238342.9A CN112330640A (en) 2020-11-09 2020-11-09 Segmentation method, device and equipment for nodule region in medical image


Publications (1)

Publication Number Publication Date
CN112330640A (en) 2021-02-05

Family

ID=74316615


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592819A (en) * 2021-07-30 2021-11-02 上海皓桦科技股份有限公司 Image processing system and method
CN115272206A (en) * 2022-07-18 2022-11-01 深圳市医未医疗科技有限公司 Medical image processing method, medical image processing device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978893A (en) * 2019-03-26 2019-07-05 腾讯科技(深圳)有限公司 Training method, device, equipment and the storage medium of image, semantic segmentation network
CN110189332A (en) * 2019-05-22 2019-08-30 中南民族大学 Prostate Magnetic Resonance Image Segmentation method and system based on weight G- Design
WO2019237342A1 (en) * 2018-06-15 2019-12-19 富士通株式会社 Training method and apparatus for classification neural network for semantic segmentation, and electronic device
CN111127499A (en) * 2019-12-20 2020-05-08 北京工业大学 Security inspection image cutter detection segmentation method based on semantic contour information


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592819A (en) * 2021-07-30 2021-11-02 上海皓桦科技股份有限公司 Image processing system and method
CN115272206A (en) * 2022-07-18 2022-11-01 Shenzhen Yiwei Medical Technology Co., Ltd. Medical image processing method, medical image processing device, computer equipment and storage medium
CN115272206B (en) * 2022-07-18 2023-07-04 Shenzhen Yiwei Medical Technology Co., Ltd. Medical image processing method, medical image processing device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
Sori et al. DFD-Net: lung cancer detection from denoised CT scan image using deep learning
Whiteley et al. DirectPET: full-size neural network PET reconstruction from sinogram data
CN113516659B (en) Medical image automatic segmentation method based on deep learning
JP2022544229A (en) 3D Object Segmentation of Localized Medical Images Using Object Detection
CN113689342A (en) Method and system for optimizing image quality
Ko et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module
US11935246B2 (en) Systems and methods for image segmentation
CN112330640A (en) Segmentation method, device and equipment for nodule region in medical image
Wu et al. Vessel-GAN: Angiographic reconstructions from myocardial CT perfusion with explainable generative adversarial networks
CN113112559A (en) Ultrasonic image segmentation method and device, terminal equipment and storage medium
CN108038840B (en) Image processing method and device, image processing equipment and storage medium
Huang et al. Joint spine segmentation and noise removal from ultrasound volume projection images with selective feature sharing
Kanchanamala et al. Optimization-enabled hybrid deep learning for brain tumor detection and classification from MRI
Marhamati et al. LAIU-Net: A learning-to-augment incorporated robust U-Net for depressed humans’ tongue segmentation
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
CN113989231A (en) Method and device for determining kinetic parameters, computer equipment and storage medium
Alzahrani et al. Deep learning approach for breast ultrasound image segmentation
US11455755B2 (en) Methods and apparatus for neural network based image reconstruction
AU2019204365C1 (en) Method and System for Image Segmentation and Identification
CN110473297B (en) Image processing method, image processing device, electronic equipment and storage medium
CN112767403A (en) Medical image segmentation model training method, medical image segmentation method and device
CN115439423B (en) CT image-based identification method, device, equipment and storage medium
CN116309806A (en) CSAI-Grid RCNN-based thyroid ultrasound image region of interest positioning method
KR102593539B1 (en) Apparatus, system, method and program for deciphering tomography image of common bile duct stone using artificial intelligence
Nour et al. Skin lesion segmentation based on edge attention vnet with balanced focal tversky loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination