CN115019049B - Bone imaging bone lesion segmentation method, system and equipment based on deep neural network - Google Patents
- Publication number
- CN115019049B (application CN202210941269.4A)
- Authority
- CN
- China
- Prior art keywords
- bone
- neural network
- network
- segmentation result
- deep neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/033—Recognition of patterns in medical or anatomical images of skeletal patterns
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a bone visualization bone lesion segmentation method, system and device based on a deep neural network, aiming to solve the prior-art problem of heavy label noise caused by the inherently diffuse character of bone visualization when segmenting bone lesions in bone visualization. A cascaded network model comprising two neural networks and a refining network is constructed. In the refining network, the DSC index between the first and second bone lesion segmentation results is calculated; segmentation results whose DSC index exceeds a threshold are defined as reliable first-stage lesion segmentation results, lesions that are difficult for the first and second neural networks to distinguish are extracted, and a refined segmentation result is output. By calculating the consistency of the two networks' outputs, with DSC as the per-lesion consistency evaluation index, the method effectively exploits the information in noisy samples and reduces the influence of label noise arising from the diffuse character of bone visualization.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a neural-network-based method for lesion segmentation in bone images, i.e., the segmentation of lesions in bone visualization images.
Background
Advanced malignancies such as breast, prostate and lung cancer often develop bone metastases. Early detection of bone metastases is therefore of great importance for selecting a therapeutic strategy, obtaining better overall survival and improving patients' quality of life. A recent national survey reports that more than 1.15 million bone scans are performed in China each year, an enormous workload for nuclear medicine physicians.
In the prior art, bone scintigraphy based on 99mTc-MDP is one of the most commonly used techniques for diagnosing bone metastases in cancer patients, owing to its whole-body coverage and high sensitivity. Three examinations are commonly used to localize lesions in the skeleton: MRI, CT and SPECT whole-body bone imaging. SPECT offers good sensitivity and comprehensiveness and is currently the main examination in the field of bone metastasis diagnosis in China. Only the most common case, SPECT whole-body bone imaging, is considered here: for each examination, a medical professional reports the location and nature of the bone lesions from the image data. At present, in hospitals with better medical conditions, the usual practice is to acquire anterior and posterior whole-body images with dedicated equipment; once acquisition of the whole-body bone visualization data is complete, a professional radiological technician reads and localizes lesions on the two views.
However, diagnosing bone metastasis remains difficult, mainly for the following reasons. First, the resolution of bone imaging is limited, so interpretation depends on experience and suffers from subjectivity, appreciable error and low efficiency. Second, physicians must extract, localize and characterize the bone lesions, a process that is not only time-consuming but also demands highly qualified operators; in actual clinical practice, this mechanical, repetitive work accounts for most of the labor. In addition, owing to the imaging characteristics of bone scintigraphy, bone lesions are easily missed during long stretches of such mechanical work.
In summary, an algorithm that automatically segments bone lesions is highly valuable for the clinical detection of bone metastases: the bone visualization only needs to be input to the algorithm, which automatically segments the bone lesions and helps physicians diagnose faster and better. Moreover, as a nuclear medicine image, the bone image (i.e., bone visualization) is inherently diffuse, and good segmentation labels are difficult to delineate; the labels in the training data therefore contain a large noise component, which degrades the accuracy and effect of any algorithm subsequently used to segment bone lesions in bone visualization automatically.
Disclosure of Invention
The invention aims to provide a bone visualization bone lesion segmentation method based on a deep neural network, to solve the prior-art problem of heavy label noise caused by the inherently diffuse character of bone visualization when segmenting bone lesions in bone visualization.
The invention specifically adopts the following technical scheme for realizing the purpose:
a bone imaging bone lesion segmentation method based on a deep neural network comprises the following steps:
s1, acquiring a bone development image, and labeling the bone development image;
s2, constructing a cascade network model based on a deep neural network;
constructing a cascaded network model based on a deep neural network, comprising a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel networks with no shared weights; the bone visualization image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone visualization image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; and the bone visualization image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and bone lesion segmentation results whose DSC index exceeds the threshold are defined as reliable first-stage lesion segmentation results; based on the input reliable first-stage lesion segmentation results and the bone visualization image, lesions that are difficult for the first and second neural networks to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the following formula:
$$\mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels;
s3, training a cascade network model based on a deep neural network;
training the cascaded network model based on the deep neural network constructed in step S2 by using the bone visualization images obtained in step S1 and the labeled bone visualization images;
s4, dividing a bone disease focus;
and segmenting the bone lesions in the bone visualization image of the subject to be examined by using the trained cascaded network model based on the deep neural network, to obtain a bone lesion segmentation result.
In step S1, data augmentation processing is carried out on the acquired bone visualization images, the data augmentation comprising random flipping, translation and rotation;

the bone visualization images after data augmentation and the labeled bone visualization images are divided into a training set and a test set in a 4:1 ratio.
In step S3, the training of the cascaded network model based on the deep neural network includes:
step S31, forward calculation;
For a feedforward neural network with $L$ layers, let the training sample set be $X \in \mathbb{R}^{d \times n}$, wherein $d$ is the dimension of a single sample and $n$ represents the number of training samples, so that the $i$-th sample can be represented as $x_i \in \mathbb{R}^{d}$; let the weight of the connection from the $j$-th neuron of layer $l$ to the $k$-th neuron of layer $l+1$ be recorded as $w_{jk}^{(l)}$, so that the connection weight matrix from layer $l$ to layer $l+1$ is $W^{(l)}$; let the activation function of the neurons on layer $l$ be $f^{(l)}$; from the input layer to the output layer, forward calculation is performed continuously as follows:

$$a^{(l+1)} = f^{(l+1)}\!\left(W^{(l)}\,a^{(l)}\right), \qquad a^{(1)} = x$$

wherein $a^{(l)}$ denotes the activation values of the neurons of layer $l$; the activation values of the neurons of the output layer are then:

$$a^{(L)} = f^{(L)}\!\left(W^{(L-1)}\,a^{(L-1)}\right)$$
step S32, updating the weight value;
neural networks use cross entropy as an objective function for classification or segmentation tasks, defined as follows:

$$J = -\frac{1}{n}\sum_{i=1}^{n}\left[\,y_i \log \hat{y}_i + \left(1 - y_i\right)\log\left(1 - \hat{y}_i\right)\right]$$

wherein $\hat{y}$ and $y$ respectively denote the output of the last layer of the network and the label; by computing the gradient of the objective function $J$ with respect to the weights and iterating with a gradient descent algorithm, the neural network continually reduces the value of the objective function and thereby finds a suitable set of weights; the gradient descent algorithm is as follows:

$$W^{(l)} \leftarrow W^{(l)} - \alpha\,\frac{\partial J}{\partial W^{(l)}}$$

wherein $\alpha$ represents a learning rate constant;
step S33: model testing: after the deep neural network model training is completed, the recognition effect of the model on the test set is quantitatively evaluated through evaluation indexes, the evaluation indexes comprising TPVF, PPV, DSC and JSC, defined as follows:
$$\mathrm{TPVF} = \frac{|V_S \cap V_G|}{|V_G|},\qquad \mathrm{PPV} = \frac{|V_S \cap V_G|}{|V_S|},\qquad \mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|},\qquad \mathrm{JSC} = \frac{|V_S \cap V_G|}{|V_S \cup V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels.
In step S33, the evaluation indexes further include sensitivity, F1 score and precision, defined as follows:

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN},\qquad \mathrm{Precision} = \frac{TP}{TP + FP},\qquad F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}}$$

wherein $TP$, $FP$ and $FN$ respectively denote the numbers of true-positive, false-positive and false-negative pixels.
In step S3, when training the cascaded network model based on the deep neural network, the loss function is:

$$L = M \odot \left(L_1 + L_2\right) + \lambda\,L_r$$

wherein $M$ is a mask used to filter out low-confidence noisy labels and $\lambda$ is a hyper-parameter used to balance the terms; $L_1$ and $L_2$ are the cross-entropy loss functions of the first and second neural networks respectively, and $L_r$ is the cross-entropy loss function of the refining network; the cross-entropy loss function is defined as:

$$L_{ce} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

wherein $N$ is the number of samples, $C$ is the number of classes, $y_{i,c}$ is the label and $p_{i,c}$ is the predicted probability output by the network.
A deep neural network based bone visualization bone lesion segmentation system, comprising:
the image acquisition and labeling module is used for acquiring a bone visualization image and labeling the bone visualization image;
the network model building module is used for building a cascade network model based on the deep neural network;
constructing a cascaded network model based on a deep neural network, comprising a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel networks with no shared weights; the bone visualization image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone visualization image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; and the bone visualization image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and bone lesion segmentation results whose DSC index exceeds the threshold are defined as reliable first-stage lesion segmentation results; based on the input reliable first-stage lesion segmentation results and the bone visualization image, lesions that are difficult for the first and second neural networks to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the following formula:
$$\mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels;
the network model training module is used for training a cascade network model based on the deep neural network;
training the cascaded network model based on the deep neural network constructed by the network model construction module, by using the bone visualization images and the labeled bone visualization images acquired by the image acquisition and labeling module;
the lesion segmentation module is used for segmenting bone lesions;
and segmenting the bone lesions in the bone visualization image of the subject to be examined by using the trained cascaded network model based on the deep neural network, to obtain a bone lesion segmentation result.
A computer device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the steps of the deep neural network based bone visualization bone lesion segmentation method described above.
The invention has the following beneficial effects:
1. The invention provides a cascaded network model based on a deep neural network. By training the network model, bone lesions are segmented automatically: a whole-body bone visualization image is input and a bone lesion segmentation result is obtained, without physicians extracting, localizing and characterizing the bone lesions. Human involvement is low and the degree of automation is high, which greatly improves the efficiency and the quality of bone lesion segmentation in bone visualization.
2. The invention adopts a dual-path architecture, i.e., two independent neural networks with no shared weights, whose parallel segmentation outputs replace a single segmentation result; this strengthens the stability of the network model. A cascaded training scheme replaces the original approach of directly merging results by voting: the parallel outputs of the two neural networks and the original input are concatenated along the channel dimension and fed into a refining network to obtain the refined segmentation result, which alleviates the heavy label noise caused by the diffuse character of bone visualization.
3. Because of the diffuse character of bone visualization, the input to the neural network is high-noise, and outputs obtained from such input are unfavorable for training the network model; yet discarding the data would be wasteful (after all, collecting and labeling medical images takes a great deal of time). The consistency of the outputs of the two neural networks is therefore calculated, with DSC as the consistency evaluation index for each output lesion segmentation result; the noisy samples are weighted in this way and only the high-confidence part is used for training, which effectively exploits the information in noisy samples and reduces the influence of label noise caused by the diffuse character of bone visualization.
Drawings
FIG. 1 is a schematic diagram of the present invention.
Detailed Description
Example 1
This embodiment provides a bone imaging bone lesion segmentation method based on a deep neural network: a network model is built and trained, and the trained network model segments bone lesions automatically. As shown in fig. 1, the segmentation method specifically includes the following steps:
the method comprises the following steps of S1, obtaining a bone visualization image, and labeling the bone visualization image;
the network model needs to learn from a large amount of data and data labels in order to construct a reliable mapping from the whole-body bone visualization to the bone lesion segmentation result.
Acquiring a bone visualization image: whole-body bone visualization images, acquired and imaged by professional nuclear medicine technicians, are obtained directly from the hospital system;
marking the bone visualization image: the bone lesion regions are delineated by junior physicians in 3D Slicer, and the results are then reviewed, corrected and finalized by senior physicians;
the acquired bone visualization images require data processing, mainly data augmentation: each input bone visualization image is normalized according to window width and window level, and then randomly flipped, translated and rotated; this expands the diversity of the training samples, so that the model learns more robust features and overfitting is alleviated;

to train and test the neural network, the processed data set is divided into a training set and a test set in a 4:1 ratio, as sketched below.
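As a minimal illustration of this preprocessing (window-level normalization, random flip/translation/rotation applied identically to image and label, and the 4:1 split), the following Python sketch can be used; the window parameters and the shift and angle ranges are assumptions for illustration, not values specified by the patent:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def normalize(img, window_center=128.0, window_width=256.0):
    # Window width / window level normalization to [0, 1]; window values are assumed.
    lo = window_center - window_width / 2.0
    return np.clip((img - lo) / window_width, 0.0, 1.0)

def augment(img, mask, rng):
    # Random flip, translation and rotation, applied identically to image and label.
    if rng.random() < 0.5:
        img, mask = img[:, ::-1].copy(), mask[:, ::-1].copy()
    dy, dx = rng.integers(-10, 11, size=2)
    img = shift(img, (dy, dx), order=1)
    mask = shift(mask, (dy, dx), order=0)   # nearest-neighbour keeps the label binary
    angle = rng.uniform(-15.0, 15.0)
    img = rotate(img, angle, reshape=False, order=1)
    mask = rotate(mask, angle, reshape=False, order=0)
    return img, mask

def split_4_to_1(samples, rng):
    # Divide the processed data set into training and test sets at a 4:1 ratio.
    idx = rng.permutation(len(samples))
    cut = int(len(samples) * 0.8)
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```

For example, `train_set, test_set = split_4_to_1(pairs, np.random.default_rng(0))` yields the 4:1 division described above.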
S2, constructing a cascade network model based on a deep neural network;
the processing object of this application is a nuclear medicine image, the bone visualization image; owing to the inherently diffuse character of bone visualization, good segmentation labels are hard to delineate, and the training data contain a large noise component. The application therefore creatively proposes and constructs a cascaded network model with a dual-path, cascaded architecture based on deep neural networks;
the cascaded network model based on the deep neural network comprises a first neural network, a second neural network and a refining network. The key point of the network model does not lie in the feature-extraction capability of these three networks: any existing advanced segmentation network can serve as the feature-extraction backbone; for example, the first neural network, the second neural network and the refining network may all adopt a U-Net model for whole-body bone visualization lesion segmentation. The first neural network and the second neural network are independent, parallel networks with no shared weights, and the bone visualization image is fed to both simultaneously: the bone visualization image is used as the input of the first neural network, which outputs the first bone lesion segmentation result, and as the input of the second neural network, which outputs the second bone lesion segmentation result. The bone visualization image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, i.e., they are concatenated along the channel dimension and fed into the refining network, which finally outputs the refined segmentation result;
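A minimal PyTorch sketch of this dual-path cascade follows; it assumes U-Net-style backbones (the patent leaves the backbone open, naming U-Net only as an example) and single-channel input images, and `backbone` is a hypothetical constructor taking the number of input channels:

```python
import torch
import torch.nn as nn

class CascadeModel(nn.Module):
    """Two independent segmentation networks with no shared weights, plus a refining network."""
    def __init__(self, backbone):
        super().__init__()
        self.net1 = backbone(in_channels=1)    # first neural network
        self.net2 = backbone(in_channels=1)    # second neural network, independent weights
        self.refine = backbone(in_channels=3)  # refining network: image + two segmentations

    def forward(self, image):
        seg1 = torch.sigmoid(self.net1(image))  # first bone lesion segmentation result
        seg2 = torch.sigmoid(self.net2(image))  # second bone lesion segmentation result
        # concatenate the image and both first-stage results along the channel dimension
        refined = torch.sigmoid(self.refine(torch.cat([image, seg1, seg2], dim=1)))
        return seg1, seg2, refined
```

The two first-stage networks are constructed separately, so nothing is shared between them; only the channel-wise concatenation couples them to the refining stage.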
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and bone lesion segmentation results whose DSC index exceeds the threshold are defined as reliable first-stage lesion segmentation results; based on the input reliable first-stage lesion segmentation results and the bone visualization image, lesions that are difficult for the first and second neural networks to distinguish are extracted, and the refined segmentation result is output;
in the refining network, DSC is used as the consistency evaluation index for each bone lesion segmentation result output by the first and second neural networks: results with DSC above the threshold are kept, and those below it are discarded. In this way the loss computed on noisy samples is weighted, and only the high-confidence part is used for training, which makes efficient use of the information in noisy samples, as sketched after the formula below.
The DSC index is calculated by the following formula:
$$\mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels.
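The per-lesion consistency check can be sketched as follows; this is a sketch under assumptions: candidate lesions are taken as connected components of the union of the two binarized outputs, and the threshold value (here 0.7) is a free parameter not fixed by the patent:

```python
import torch
from scipy.ndimage import label as connected_components

def dsc(a, b, eps=1e-6):
    # Dice similarity coefficient between two binary masks.
    inter = (a * b).sum()
    return float((2.0 * inter + eps) / (a.sum() + b.sum() + eps))

def reliable_lesion_mask(seg1, seg2, threshold=0.7):
    # Keep lesions on which the two networks agree (per-lesion DSC above the threshold).
    union = ((seg1 > 0.5) | (seg2 > 0.5)).cpu().numpy()
    comps, n = connected_components(union)
    mask = torch.zeros_like(seg1)
    for k in range(1, n + 1):
        region = torch.from_numpy(comps == k)
        s1 = ((seg1 > 0.5) & region).float()
        s2 = ((seg2 > 0.5) & region).float()
        if dsc(s1, s2) > threshold:        # reliable first-stage lesion segmentation result
            mask[region] = 1.0
    return mask
```

Lesions below the threshold are exactly the ones "difficult to distinguish"; rather than being trusted as labels, they are left for the refining network to resolve.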
S3, training a cascade network model based on a deep neural network;
training the cascaded network model based on the deep neural network constructed in step S2 by using the bone visualization images obtained in step S1 and the labeled bone visualization images;
training the cascaded network model comprises the following steps:
step S31, forward calculation;
For a feedforward neural network with $L$ layers, let the training sample set be $X \in \mathbb{R}^{d \times n}$, wherein $d$ is the dimension of a single sample and $n$ represents the number of training samples, so that the $i$-th sample can be represented as $x_i \in \mathbb{R}^{d}$; let the weight of the connection from the $j$-th neuron of layer $l$ to the $k$-th neuron of layer $l+1$ be recorded as $w_{jk}^{(l)}$, so that the connection weight matrix from layer $l$ to layer $l+1$ is $W^{(l)}$; let the activation function of the neurons on layer $l$ be $f^{(l)}$; from the input layer to the output layer, forward calculation is performed continuously as follows:

$$a^{(l+1)} = f^{(l+1)}\!\left(W^{(l)}\,a^{(l)}\right), \qquad a^{(1)} = x$$

wherein $a^{(l)}$ denotes the activation values of the neurons of layer $l$; the activation values of the neurons of the output layer are then:

$$a^{(L)} = f^{(L)}\!\left(W^{(L-1)}\,a^{(L-1)}\right)$$
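A minimal numpy sketch of this forward computation, assuming sigmoid activations on every layer (the patent does not fix a particular activation function):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(weights, x):
    # Forward computation a(l+1) = f(W(l) a(l)) from the input layer to the output layer.
    activations = [x]
    for W in weights:                         # one connection weight matrix per layer
        activations.append(sigmoid(W @ activations[-1]))
    return activations                        # activations[-1]: output-layer activation values
```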
step S32, updating the weight value;
neural networks use cross entropy as an objective function for classification or segmentation tasks, defined as follows:

$$J = -\frac{1}{n}\sum_{i=1}^{n}\left[\,y_i \log \hat{y}_i + \left(1 - y_i\right)\log\left(1 - \hat{y}_i\right)\right]$$

wherein $\hat{y}$ and $y$ respectively denote the output of the last layer of the network and the label; by computing the gradient of the objective function $J$ with respect to the weights and iterating with a gradient descent algorithm, the neural network continually reduces the value of the objective function and thereby finds a suitable set of weights; the gradient descent algorithm is as follows:

$$W^{(l)} \leftarrow W^{(l)} - \alpha\,\frac{\partial J}{\partial W^{(l)}}$$

wherein $\alpha$ represents a learning rate constant;
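Continuing that sketch, one gradient-descent update on the cross-entropy objective for a single sigmoid layer; the closed-form gradient (y_hat - y) x^T is specific to the sigmoid/cross-entropy pairing, and the learning rate is an assumed value:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(y_hat, y, eps=1e-12):
    # J = -(1/n) * sum( y*log(y_hat) + (1-y)*log(1-y_hat) )
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat)))

def gradient_descent_step(W, x, y, alpha=0.01):
    # One update W <- W - alpha * dJ/dW for a single sigmoid layer on one sample.
    y_hat = sigmoid(W @ x)
    grad = np.outer(y_hat - y, x)   # dJ/dW = (y_hat - y) x^T for sigmoid + cross entropy
    return W - alpha * grad
```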
step S33: model testing: after the deep neural network model training is completed, the recognition effect of the model on the test set is quantitatively evaluated through evaluation indexes, the evaluation indexes comprising TPVF, PPV, DSC and JSC, defined as follows:
$$\mathrm{TPVF} = \frac{|V_S \cap V_G|}{|V_G|},\qquad \mathrm{PPV} = \frac{|V_S \cap V_G|}{|V_S|},\qquad \mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|},\qquad \mathrm{JSC} = \frac{|V_S \cap V_G|}{|V_S \cup V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels;
in step S33, the evaluation indexes further include sensitivity, F1 score and precision, defined as follows:

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN},\qquad \mathrm{Precision} = \frac{TP}{TP + FP},\qquad F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}}$$

wherein $TP$, $FP$ and $FN$ respectively denote the numbers of true-positive, false-positive and false-negative pixels.
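All of these pixel-level indexes can be computed directly from the predicted and ground-truth masks; a short sketch:

```python
import numpy as np

def evaluate(pred, gt, eps=1e-6):
    # TPVF, PPV, DSC and JSC, plus sensitivity, precision and F1, from binary masks.
    vs, vg = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(vs, vg).sum()
    union = np.logical_or(vs, vg).sum()
    tpvf = inter / (vg.sum() + eps)              # true positive volume fraction (sensitivity)
    ppv = inter / (vs.sum() + eps)               # positive predictive value (precision)
    dsc = 2.0 * inter / (vs.sum() + vg.sum() + eps)
    jsc = inter / (union + eps)
    f1 = 2.0 * ppv * tpvf / (ppv + tpvf + eps)
    return {"TPVF": tpvf, "PPV": ppv, "DSC": dsc, "JSC": jsc,
            "sensitivity": tpvf, "precision": ppv, "F1": f1}
```

Note that for pixel sets, TPVF coincides with sensitivity and PPV with precision, so the F1 score here equals DSC up to the smoothing term.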
In step S3, when training the cascaded network model based on the deep neural network, the loss function is:

$$L = M \odot \left(L_1 + L_2\right) + \lambda\,L_r$$

wherein $M$ is a mask used to filter out low-confidence noisy labels and $\lambda$ is a hyper-parameter used to balance the terms; $L_1$ and $L_2$ are the cross-entropy loss functions of the first and second neural networks respectively, and $L_r$ is the cross-entropy loss function of the refining network; the cross-entropy loss function is defined as:

$$L_{ce} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

wherein $N$ is the number of samples, $C$ is the number of classes, $y_{i,c}$ is the label and $p_{i,c}$ is the predicted probability output by the network.
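A PyTorch sketch of this masked, weighted loss under the reconstruction above; the exact placement of the mask and of the balancing hyper-parameter is an assumption, and `lambda_r` is an illustrative name:

```python
import torch.nn.functional as F

def cascade_loss(seg1, seg2, refined, target, mask, lambda_r=1.0):
    # L = M * (L1 + L2) + lambda * Lr, with per-pixel binary cross entropy.
    l1 = F.binary_cross_entropy(seg1, target, reduction="none")
    l2 = F.binary_cross_entropy(seg2, target, reduction="none")
    lr = F.binary_cross_entropy(refined, target, reduction="none")
    # the mask M filters low-confidence noisy labels out of the first-stage losses
    return (mask * (l1 + l2)).mean() + lambda_r * lr.mean()
```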
S4, segmenting the bone lesions;
and segmenting the bone lesions in the bone visualization image of the subject to be examined by using the trained cascaded network model based on the deep neural network, to obtain a bone lesion segmentation result.
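Once trained, inference is a single forward pass; a usage sketch combining the pieces above, in which `make_unet`, `load_bone_scan` and the checkpoint path are hypothetical names, not part of the patent:

```python
import torch

model = CascadeModel(backbone=make_unet)            # hypothetical backbone constructor
model.load_state_dict(torch.load("cascade.pt"))     # hypothetical checkpoint path
model.eval()

with torch.no_grad():
    image = load_bone_scan("patient_001.dcm")       # hypothetical loader -> (1, 1, H, W)
    _, _, refined = model(image)
    lesion_mask = (refined > 0.5).float()           # final bone lesion segmentation result
```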
Example 2
This embodiment provides a bone imaging bone lesion segmentation system based on a deep neural network: a network model is built and trained, and the trained network model segments bone lesions automatically. The segmentation system comprises:
the image acquisition and labeling module is used for acquiring a bone visualization image and labeling the bone visualization image;
the network model construction module is used for constructing a cascade network model based on a deep neural network;
constructing a cascaded network model based on a deep neural network, comprising a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel networks with no shared weights; the bone visualization image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone visualization image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; and the bone visualization image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and bone lesion segmentation results whose DSC index exceeds the threshold are defined as reliable first-stage lesion segmentation results; based on the input reliable first-stage lesion segmentation results and the bone visualization image, lesions that are difficult for the first and second neural networks to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the formula:
$$\mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels;
the network model training module is used for training a cascade network model based on a deep neural network;
training the cascaded network model based on the deep neural network constructed by the network model construction module, by using the bone visualization images and the labeled bone visualization images acquired by the image acquisition and labeling module;
the lesion segmentation module is used for segmenting bone lesions;
and segmenting the bone lesions in the bone visualization image of the subject to be examined by using the trained cascaded network model based on the deep neural network, to obtain a bone lesion segmentation result.
Example 3
The present embodiment provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the bone visualization bone lesion segmentation method based on the deep neural network according to embodiment 1 when executing the computer program.
Example 4
The present embodiment provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the method for bone imaging lesion segmentation based on deep neural network according to embodiment 1.
Claims (7)
1. A bone visualization bone lesion segmentation method based on a deep neural network is characterized by comprising the following steps:
s1, acquiring a bone development image, and labeling the bone development image;
s2, constructing a cascade network model based on a deep neural network;
constructing a cascaded network model based on a deep neural network, comprising a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel networks with no shared weights; the bone visualization image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone visualization image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; and the bone visualization image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and bone lesion segmentation results whose DSC index exceeds the threshold are defined as reliable first-stage lesion segmentation results; based on the input reliable first-stage lesion segmentation results and the bone visualization image, lesions that are difficult for the first and second neural networks to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the formula:
$$\mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels;
s3, training a cascade network model based on a deep neural network;
training the cascaded network model based on the deep neural network constructed in step S2 by using the bone visualization images obtained in step S1 and the labeled bone visualization images;
s4, dividing a bone disease focus;
and segmenting the bone lesions in the bone visualization image of the subject to be examined by using the trained cascaded network model based on the deep neural network, to obtain a bone lesion segmentation result.
2. The method for bone imaging bone lesion segmentation based on the deep neural network as claimed in claim 1, wherein: in step S1, data augmentation processing is carried out on the acquired bone visualization images, the data augmentation comprising random flipping, translation and rotation;

the bone visualization images after data augmentation and the labeled bone visualization images are divided into a training set and a test set in a 4:1 ratio.
3. The method for bone imaging bone lesion segmentation based on deep neural network as claimed in claim 1, wherein the training of the cascaded network model based on deep neural network in step S3 comprises:
step S31, forward calculation;
For a feedforward neural network with $L$ layers, let the training sample set be $X \in \mathbb{R}^{d \times n}$, wherein $d$ is the dimension of a single sample and $n$ represents the number of training samples, so that the $i$-th sample can be represented as $x_i \in \mathbb{R}^{d}$; let the weight of the connection from the $j$-th neuron of layer $l$ to the $k$-th neuron of layer $l+1$ be recorded as $w_{jk}^{(l)}$, so that the connection weight matrix from layer $l$ to layer $l+1$ is $W^{(l)}$; let the activation function of the neurons on layer $l$ be $f^{(l)}$; from the input layer to the output layer, forward calculation is performed continuously as follows:

$$a^{(l+1)} = f^{(l+1)}\!\left(W^{(l)}\,a^{(l)}\right), \qquad a^{(1)} = x$$

wherein $a^{(l)}$ denotes the activation values of the neurons of layer $l$; the activation values of the neurons of the output layer are then:

$$a^{(L)} = f^{(L)}\!\left(W^{(L-1)}\,a^{(L-1)}\right)$$
step S32, updating the weight value;
the neural network adopts cross entropy as the objective function of the classification or segmentation task, defined as follows:

$$J = -\frac{1}{n}\sum_{i=1}^{n}\left[\,y_i \log \hat{y}_i + \left(1 - y_i\right)\log\left(1 - \hat{y}_i\right)\right]$$

wherein $\hat{y}$ and $y$ respectively denote the output of the last layer of the network and the label; by computing the gradient of the objective function $J$ with respect to the weights and iterating with a gradient descent algorithm, the neural network continually reduces the value of the objective function and thereby finds a suitable set of weights; the gradient descent algorithm is as follows:

$$W^{(l)} \leftarrow W^{(l)} - \alpha\,\frac{\partial J}{\partial W^{(l)}}$$

wherein $\alpha$ represents a learning rate constant;
step S33: model testing: after the deep neural network model training is completed, the recognition effect of the model on the test set is quantitatively evaluated through evaluation indexes, the evaluation indexes comprising TPVF, PPV, DSC and JSC, defined as follows:
$$\mathrm{TPVF} = \frac{|V_S \cap V_G|}{|V_G|},\qquad \mathrm{PPV} = \frac{|V_S \cap V_G|}{|V_S|},\qquad \mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|},\qquad \mathrm{JSC} = \frac{|V_S \cap V_G|}{|V_S \cup V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels.
4. The method for bone imaging bone lesion segmentation based on the deep neural network as claimed in claim 3, wherein in step S33 the evaluation indexes further include sensitivity, F1 score and precision, defined as follows:

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN},\qquad \mathrm{Precision} = \frac{TP}{TP + FP},\qquad F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}}$$

wherein $TP$, $FP$ and $FN$ respectively denote the numbers of true-positive, false-positive and false-negative pixels.

5. The method as claimed in claim 1, wherein in step S3, when the cascaded network model based on the deep neural network is trained, the loss function is:

$$L = M \odot \left(L_1 + L_2\right) + \lambda\,L_r$$

wherein $M$ is a mask used to filter out low-confidence noisy labels and $\lambda$ is a hyper-parameter used to balance the terms; $L_1$ and $L_2$ are the cross-entropy loss functions of the first and second neural networks respectively, and $L_r$ is the cross-entropy loss function of the refining network, the cross-entropy loss function being defined as:

$$L_{ce} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

wherein $N$ is the number of samples, $C$ is the number of classes, $y_{i,c}$ is the label and $p_{i,c}$ is the predicted probability output by the network.
6. A bone visualization bone lesion segmentation system based on a deep neural network, comprising:
the image acquisition and labeling module is used for acquiring a bone visualization image and labeling the bone visualization image;
the network model building module is used for building a cascade network model based on the deep neural network;
constructing a cascaded network model based on a deep neural network, comprising a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel networks with no shared weights; the bone visualization image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone visualization image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; and the bone visualization image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and bone lesion segmentation results whose DSC index exceeds the threshold are defined as reliable first-stage lesion segmentation results; based on the input reliable first-stage lesion segmentation results and the bone visualization image, lesions that are difficult for the first and second neural networks to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the formula:
$$\mathrm{DSC} = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the cascaded deep-neural-network model and the true positive-sample pixels;
the network model training module is used for training a cascade network model based on a deep neural network;
training the cascaded network model based on the deep neural network constructed by the network model construction module, by using the bone visualization images and the labeled bone visualization images acquired by the image acquisition and labeling module;
the lesion segmentation module is used for segmenting bone lesions;
and segmenting the bone lesions in the bone visualization image of the subject to be examined by using the trained cascaded network model based on the deep neural network, to obtain a bone lesion segmentation result.
7. A computer device comprising a memory storing a computer program and a processor, wherein the processor when executing the computer program performs the steps of a method for bone visualization bone lesion segmentation based on deep neural networks as claimed in any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210941269.4A CN115019049B (en) | 2022-08-08 | 2022-08-08 | Bone imaging bone lesion segmentation method, system and equipment based on deep neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210941269.4A CN115019049B (en) | 2022-08-08 | 2022-08-08 | Bone imaging bone lesion segmentation method, system and equipment based on deep neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115019049A (en) | 2022-09-06
CN115019049B (en) | 2022-12-16
Family
ID=83066074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210941269.4A Active CN115019049B (en) | 2022-08-08 | 2022-08-08 | Bone imaging bone lesion segmentation method, system and equipment based on deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115019049B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115261963A (en) * | 2022-09-27 | 2022-11-01 | 南通如东依航电子研发有限公司 | Method for improving deep plating capability of PCB |
CN116188469A (en) * | 2023-04-28 | 2023-05-30 | 之江实验室 | Focus detection method, focus detection device, readable storage medium and electronic equipment |
CN117036376B (en) * | 2023-10-10 | 2024-01-30 | 四川大学 | Lesion image segmentation method and device based on artificial intelligence and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109035263A (en) * | 2018-08-14 | 2018-12-18 | 电子科技大学 | Brain tumor image automatic segmentation method based on convolutional neural networks |
CN110555856A (en) * | 2019-09-09 | 2019-12-10 | 成都智能迭迦科技合伙企业(有限合伙) | Macular edema lesion area segmentation method based on deep neural network |
CN112102339A (en) * | 2020-09-21 | 2020-12-18 | 四川大学 | Whole-body bone imaging bone segmentation method based on atlas registration |
CN112308853A (en) * | 2020-10-20 | 2021-02-02 | 平安科技(深圳)有限公司 | Electronic equipment, medical image index generation method and device and storage medium |
CN114037072A (en) * | 2021-10-11 | 2022-02-11 | 浙江大华技术股份有限公司 | Neural network optimization method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11568627B2 (en) * | 2015-11-18 | 2023-01-31 | Adobe Inc. | Utilizing interactive deep learning to select objects in digital visual media |
-
2022
- 2022-08-08 CN CN202210941269.4A patent/CN115019049B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109035263A (en) * | 2018-08-14 | 2018-12-18 | 电子科技大学 | Brain tumor image automatic segmentation method based on convolutional neural networks |
CN110555856A (en) * | 2019-09-09 | 2019-12-10 | 成都智能迭迦科技合伙企业(有限合伙) | Macular edema lesion area segmentation method based on deep neural network |
CN112102339A (en) * | 2020-09-21 | 2020-12-18 | 四川大学 | Whole-body bone imaging bone segmentation method based on atlas registration |
CN112308853A (en) * | 2020-10-20 | 2021-02-02 | 平安科技(深圳)有限公司 | Electronic equipment, medical image index generation method and device and storage medium |
CN114037072A (en) * | 2021-10-11 | 2022-02-11 | 浙江大华技术股份有限公司 | Neural network optimization method and device |
Non-Patent Citations (6)
Title |
---|
Harnessing 2D Networks and 3D Features for Automated Pancreas Segmentation from Volumetric CT Images; Huan Chen et al.; Medical Image Computing and Computer Assisted Intervention – MICCAI 2019; 2019-10-10; 339-347 *
A high-precision, low-overhead single-packet traceback method; Lu Ning et al.; Journal of Software (软件学报); 2017-10-15 (No. 10); 217-236 *
Plant image classification based on discriminative key regions and deep learning; Zhang Xueqin et al.; Computer Engineering and Design (计算机工程与设计); 2020-03-16 (No. 03); 150-156 *
Breast cancer medical image detection method based on improved deep learning; Chen Tong; Modern Computer (现代计算机); 2020-05-15 (No. 14); 36-40+45 *
Research and application of image semantic segmentation based on deep learning; Peng Chao; China Master's Theses Full-text Database (Information Science and Technology); 2019-02-15 (No. 2); I138-1556 *
Research on a low-contrast SPECT image segmentation algorithm based on Gaussian mixture models; Zhang Zhansheng; China Medical Engineering (中国医学工程); 2021-11-25; Vol. 29 (No. 11); 9-12 *
Also Published As
Publication number | Publication date |
---|---|
CN115019049A (en) | 2022-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115019049B (en) | Bone imaging bone lesion segmentation method, system and equipment based on deep neural network | |
US11101033B2 (en) | Medical image aided diagnosis method and system combining image recognition and report editing | |
CN108898595B (en) | Construction method and application of positioning model of focus region in chest image | |
CN108464840B (en) | Automatic detection method and system for breast lumps | |
CN109544526B (en) | Image recognition system, device and method for chronic atrophic gastritis | |
CN112101451B (en) | Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block | |
CN110288597B (en) | Attention mechanism-based wireless capsule endoscope video saliency detection method | |
CN109523535B (en) | Pretreatment method of lesion image | |
CN118674677A (en) | Construction method and application of gastric cancer image recognition model | |
CN109858540B (en) | Medical image recognition system and method based on multi-mode fusion | |
CN108257135A (en) | The assistant diagnosis system of medical image features is understood based on deep learning method | |
CN106033540B (en) | A kind of microecology in vaginas morphology automatic analysis method and system | |
CN115345819A (en) | Gastric cancer image recognition system, device and application thereof | |
CN114782307A (en) | Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning | |
Zhang et al. | A semi-supervised learning approach for COVID-19 detection from chest CT scans | |
Kumaraswamy et al. | A review on cancer detection strategies with help of biomedical images using machine learning techniques | |
Makarovskikh et al. | Automatic classification Infectious disease X-ray images based on Deep learning Algorithms | |
CN112419246B (en) | Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution | |
CN110782441A (en) | DR image pulmonary tuberculosis intelligent segmentation and detection method based on deep learning | |
Li et al. | MVDI25K: A large-scale dataset of microscopic vaginal discharge images | |
Tang et al. | CNN-based automatic detection of bone conditions via diagnostic CT images for osteoporosis screening | |
Vasconcelos et al. | A new risk assessment methodology for dermoscopic skin lesion images | |
Wan et al. | Recognition of Cheating Behavior in Examination Room Based on Deep Learning | |
Pant et al. | Disease classification of chest X-ray using CNN | |
Xiong et al. | Deep Ensemble Learning Network for Kidney Lesion Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |