CN115019049B - Bone imaging bone lesion segmentation method, system and equipment based on deep neural network - Google Patents

Bone imaging bone lesion segmentation method, system and equipment based on deep neural network Download PDF

Info

Publication number
CN115019049B
CN115019049B CN202210941269.4A
Authority
CN
China
Prior art keywords
bone
neural network
network
segmentation result
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210941269.4A
Other languages
Chinese (zh)
Other versions
CN115019049A (en)
Inventor
章毅
李林
皮勇
蔡华伟
魏建安
赵祯
蒋丽莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202210941269.4A priority Critical patent/CN115019049B/en
Publication of CN115019049A publication Critical patent/CN115019049A/en
Application granted granted Critical
Publication of CN115019049B publication Critical patent/CN115019049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/033 Recognition of patterns in medical or anatomical images of skeletal patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-neural-network-based method, system and device for segmenting bone lesions in bone imaging, aiming to solve the prior-art problem of high label noise caused by the diffuse characteristic of bone imaging itself when bone lesions are segmented in bone imaging. A cascade network model comprising a refining network and two neural networks is constructed; in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, a bone lesion segmentation result whose DSC index is higher than a threshold value is defined as a reliable first-stage lesion segmentation result, lesions that are difficult for the first and second neural networks to distinguish are extracted, and the refined segmentation result is output. By calculating the consistency of the outputs of the two neural networks and using DSC as the consistency evaluation index for each lesion segmentation result, the method can effectively use the information in noise samples and reduce the influence of the label noise caused by the diffuse characteristic of bone imaging.

Description

Bone imaging bone lesion segmentation method, system and equipment based on deep neural network
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for segmenting lesions in bone imaging images based on a neural network.
Background
Advanced malignancies, such as breast, prostate and lung cancer, often develop bone metastases. Early detection of bone metastases is therefore of great importance for choosing a treatment strategy, obtaining better overall survival and improving patients' quality of life. Recent national survey reports show that more than 1.15 million bone scans are performed in China every year, which imposes an enormous workload on nuclear medicine physicians.
In the prior art, bone scintigraphy based on 99mTc-MDP is one of the most commonly used techniques for diagnosing bone metastases in cancer patients, owing to its advantages of whole-body coverage and high sensitivity. For locating skeletal lesions, three kinds of examination are commonly adopted: MRI, CT and SPECT whole-body bone imaging. SPECT offers good sensitivity and comprehensiveness and is currently the main examination means in the field of bone metastasis diagnosis in China. Only the most common SPECT whole-body bone imaging examination is considered here, in which a medical professional reports the location and nature of bone lesions from the image data of each examination. At present, in hospitals with better medical conditions, the general practice is to collect anterior and posterior whole-body images with professional equipment; after the whole-body bone imaging data have been collected, a professional radiological technician reads and localizes lesions on the two images.
Diagnosing bone metastasis nevertheless remains difficult, mainly for the following reasons. First, the resolution of bone imaging is limited, so interpretation depends on experience and suffers from subjectivity, apparent error and low efficiency. Second, doctors need to extract, locate and characterize bone lesions; this process is time-consuming and places high demands on the qualification of the operating doctors, and in actual clinical practice this mechanically repetitive process is the main labor cost. In addition, because of the imaging characteristics of bone imaging, bone lesions are easily missed during long periods of such mechanical work.
In summary, an algorithm capable of automatically segmenting bone lesions is highly necessary for the clinical detection of bone metastases: a bone imaging image need only be fed into the algorithm for the bone lesions in it to be segmented automatically, helping doctors diagnose faster and better. However, as a kind of nuclear medicine image, the bone imaging image is inherently diffuse, and good segmentation labels are difficult to delineate; the labels in the training data therefore contain a large noise component, which affects the accuracy and effect of subsequent automatic bone lesion segmentation by the algorithm.
Disclosure of Invention
The invention aims to provide a bone imaging bone lesion segmentation method based on a deep neural network, so as to solve the prior-art problem of high label noise caused by the diffuse characteristic of bone imaging itself when segmenting bone lesions in bone imaging.
The invention specifically adopts the following technical scheme for realizing the purpose:
a bone imaging bone lesion segmentation method based on a deep neural network comprises the following steps:
s1, acquiring a bone development image, and labeling the bone development image;
s2, constructing a cascade network model based on a deep neural network;
constructing a cascade network model based on a deep neural network, which comprises a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel neural networks with no shared weights; the bone imaging image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone imaging image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; the bone imaging image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and a bone lesion segmentation result whose DSC index is higher than a threshold value is defined as a reliable first-stage lesion segmentation result; based on the input reliable first-stage lesion segmentation results and the bone imaging image, lesions that are difficult for the first neural network and the second neural network to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the following formula:
Figure 499920DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 948219DEST_PATH_IMAGE002
and with
Figure 9847DEST_PATH_IMAGE003
Respectively representing the number of positive sample pixels and the number of real positive sample pixels predicted by a cascade network model based on a deep neural network;
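By way of illustration, the DSC of a pair of binary masks can be computed with a few lines of numpy; the following sketch is illustrative only (the function name, array layout and epsilon guard are choices of this sketch, not of the patent):

    import numpy as np

    def dice_coefficient(pred, truth, eps=1e-8):
        # DSC = 2|V_S ∩ V_G| / (|V_S| + |V_G|) for two binary masks
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

Inside the refining network the same measure is applied between the two first-stage outputs, so `truth` may equally well be the second network's prediction rather than a ground-truth label.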
s3, training a cascade network model based on a deep neural network;
training the deep-neural-network-based cascade network model constructed in step S2 by using the bone imaging images obtained in step S1 and their labels;
s4, dividing a bone disease focus;
and segmenting the femoral lesion in the bone image of the object to be detected by utilizing the trained cascade network model based on the deep neural network to obtain a bone lesion segmentation result.
In step S1, data augmentation processing is carried out on the obtained bone imaging image, the data augmentation processing comprising random flipping, translation and rotation;
the data-augmented bone imaging images, together with their labels, are divided into a training set and a test set at a ratio of 4:1.
In step S3, the training of the cascaded network model based on the deep neural network includes:
step S31, forward calculation;
For a feedforward neural network with $L$ layers, let its training sample set be $X = \{x^{(1)}, x^{(2)}, \ldots, x^{(N)}\}$, where $d$ is the dimension of a single sample and $N$ is the number of training samples, so that the $i$-th sample can be represented as $x^{(i)} \in \mathbb{R}^{d}$. Let the weight of the connection from the $j$-th neuron of layer $l$ to the $k$-th neuron of layer $l+1$ be denoted $w_{kj}^{(l+1)}$, so that the connection weight matrix from layer $l$ to layer $l+1$ is $W^{(l+1)}$, and let the activation function of the neurons in layer $l$ be $f^{(l)}$. Forward computation is performed successively from the input layer to the output layer:

$$a^{(l+1)} = f^{(l+1)}\left(W^{(l+1)} a^{(l)} + b^{(l+1)}\right)$$

where $a^{(l)}$ denotes the activation values of the neurons in layer $l$ and $b^{(l+1)}$ is the bias vector of layer $l+1$; the activation values of the neurons in the network output layer are then:

$$a^{(L)} = f^{(L)}\left(W^{(L)} a^{(L-1)} + b^{(L)}\right)$$

the network output of the last layer is used to design a performance function $J\left(a^{(L)}, y\right)$, where $y$ is the label.
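As a concrete illustration of this layer-by-layer computation, a minimal numpy sketch of the forward pass follows (the sigmoid activation, layer sizes and variable names are illustrative assumptions of the sketch):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, weights, biases):
        # propagate a^(l+1) = f(W^(l+1) a^(l) + b^(l+1)) from input to output layer
        a = x
        for W, b in zip(weights, biases):
            a = sigmoid(W @ a + b)
        return a  # output-layer activation a^(L)

    # toy usage: a network with layer sizes 4 -> 5 -> 2
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((5, 4)), rng.standard_normal((2, 5))]
    biases = [np.zeros(5), np.zeros(2)]
    output = forward(rng.standard_normal(4), weights, biases)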
step S32, updating the weights;
for classification or segmentation tasks, the neural network adopts the cross entropy as its objective function, defined as follows:

$$J = -\frac{1}{N}\sum_{i=1}^{N}\left[y^{(i)}\log o^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - o^{(i)}\right)\right]$$

where $o$ and $y$ respectively denote the output of the last layer of the network and the label. By computing the gradient of the objective function $J$ with respect to the weights and iterating with a gradient descent algorithm, the neural network continually reduces the value of the objective function and thereby finds a suitable set of weights. The gradient descent update is:

$$W \leftarrow W - \alpha \frac{\partial J}{\partial W}$$

where $\alpha$ denotes the learning-rate constant;
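To make the update rule concrete, consider the simplest case of a single sigmoid output trained with cross entropy, for which the gradient takes the closed form $X^{T}(o - y)/N$; the sketch below is illustrative only (shapes and learning rate are assumptions, not values from the patent):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gradient_step(W, X, y, alpha=0.1):
        # one update W <- W - alpha * dJ/dW for a sigmoid output + cross entropy
        N = X.shape[0]
        o = sigmoid(X @ W)            # network output o
        grad = X.T @ (o - y) / N      # dJ/dW in closed form
        return W - alpha * grad

    # toy usage
    rng = np.random.default_rng(1)
    X = rng.standard_normal((8, 3))
    y = rng.integers(0, 2, 8).astype(float)
    W = np.zeros(3)
    for _ in range(100):
        W = gradient_step(W, X, y)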
step S33: model testing: after the training of the deep neural network model is completed, the recognition effect of the model on the test set is quantitatively evaluated through evaluation indexes, the evaluation indexes comprising TPVF, PPV, DSC and JSC, defined as follows:

$$TPVF = \frac{|V_S \cap V_G|}{|V_G|}, \qquad PPV = \frac{|V_S \cap V_G|}{|V_S|}, \qquad DSC = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}, \qquad JSC = \frac{|V_S \cap V_G|}{|V_S \cup V_G|}$$

where $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the deep-neural-network-based cascade network model and the true positive-sample pixels.
In step S33, the evaluation indexes further include sensitivity, F1 score and precision, defined as follows:

$$Sensitivity = \frac{TP}{TP + FN}, \qquad Precision = \frac{TP}{TP + FP}, \qquad F1 = \frac{2 \times Precision \times Sensitivity}{Precision + Sensitivity}$$

where $TP$, $FP$, $FN$ and $TN$ respectively denote the numbers of true positives, false positives, false negatives and true negatives.
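All of these indexes reduce to pixel counts over the predicted mask $V_S$ and the ground-truth mask $V_G$, so they can be evaluated together; the following numpy sketch is illustrative only (the function name and epsilon guard are assumptions of the sketch):

    import numpy as np

    def segmentation_metrics(pred, truth, eps=1e-8):
        # TPVF, PPV, DSC, JSC plus sensitivity / precision / F1 from binary masks
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.logical_and(pred, truth).sum()    # |V_S ∩ V_G| = true positives
        fp = np.logical_and(pred, ~truth).sum()   # false positives
        fn = np.logical_and(~pred, truth).sum()   # false negatives
        union = np.logical_or(pred, truth).sum()  # |V_S ∪ V_G|
        return {
            "TPVF (sensitivity)": tp / (truth.sum() + eps),
            "PPV (precision)": tp / (pred.sum() + eps),
            "DSC": 2.0 * tp / (pred.sum() + truth.sum() + eps),
            "JSC": tp / (union + eps),
            "F1": 2.0 * tp / (2.0 * tp + fp + fn + eps),  # equals DSC for binary masks
        }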
In step S3, when training the deep-neural-network-based cascade network model, the loss function is:

$$\mathcal{L} = M \odot \left(L_1 + L_2\right) + \lambda L_r$$

where $M$ is a mask used to filter out low-confidence noise labels and $\lambda$ is a hyper-parameter used to balance the two terms; $L_1$ and $L_2$ are the cross-entropy loss functions of the first and second neural networks, respectively, and $L_r$ is the cross-entropy loss function of the refining network; the cross-entropy loss function is defined as:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

where $N$ is the number of samples, $C$ is the number of classes, $y_{i,c}$ is the label, and $p_{i,c}$ is the predicted probability output by the network.
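A sketch of this combined loss in PyTorch follows, assuming the per-lesion confidence has already been collapsed into a per-pixel {0,1} mask M; the value of lam and the reduction scheme are illustrative assumptions of the sketch:

    import torch
    import torch.nn.functional as F

    def cascade_loss(logits1, logits2, logits_r, target, mask, lam=0.5):
        # L = M ⊙ (L1 + L2) + lam * Lr with per-pixel cross entropy
        # logits*: (N, C, H, W); target: (N, H, W) class indices; mask: (N, H, W)
        l1 = F.cross_entropy(logits1, target, reduction="none")  # (N, H, W)
        l2 = F.cross_entropy(logits2, target, reduction="none")
        lr = F.cross_entropy(logits_r, target)                   # mean over pixels
        masked = (mask * (l1 + l2)).sum() / mask.sum().clamp(min=1)
        return masked + lam * lr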
A bone imaging bone lesion segmentation system based on a deep neural network, comprising:
the image acquisition and labeling module is used for acquiring a bone imaging image and labeling the bone imaging image;
the network model building module is used for building a cascade network model based on the deep neural network;
constructing a cascade network model based on a deep neural network, which comprises a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel neural networks with no shared weights; the bone imaging image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone imaging image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; the bone imaging image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and a bone lesion segmentation result whose DSC index is higher than a threshold value is defined as a reliable first-stage lesion segmentation result; based on the input reliable first-stage lesion segmentation results and the bone imaging image, lesions that are difficult for the first neural network and the second neural network to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the following formula:
Figure 713580DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 869755DEST_PATH_IMAGE002
and
Figure 994706DEST_PATH_IMAGE003
respectively representing the number of positive sample pixels and the number of real positive sample pixels predicted by a cascade network model based on a deep neural network;
the network model training module is used for training a cascade network model based on the deep neural network;
training the deep-neural-network-based cascade network model constructed by the network model construction module by using the bone imaging images acquired by the image acquisition and labeling module and their labels;
the lesion segmentation module is used for segmenting bone lesions;
segmenting bone lesions in the bone imaging image of the subject to be examined by using the trained deep-neural-network-based cascade network model to obtain a bone lesion segmentation result.
A computer device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the deep-neural-network-based bone imaging bone lesion segmentation method described above are implemented.
The invention has the following beneficial effects:
1. The invention provides a cascade network model based on a deep neural network. By training the network model, automatic segmentation of bone lesions is realized: a whole-body bone imaging image is input and a bone lesion segmentation result is obtained, without requiring doctors to extract, locate and characterize bone lesions. Manual participation is reduced and the degree of automation is higher, which greatly improves both the efficiency and the quality of bone lesion segmentation in bone imaging.
2. The invention adopts a dual-path architecture, namely two independent neural networks with no shared weights; the segmentation results output by the two parallel networks replace a single segmentation result, which enhances the stability of the network model to a certain extent. A cascaded training mode replaces the original mode of directly merging the results by voting: the parallel outputs of the two neural networks and the original input are concatenated along the channel dimension and fed into the refining network to obtain the refined segmentation result, which alleviates the problem of high label noise caused by the diffuse characteristic of bone imaging.
3. Owing to the diffuse characteristic of bone imaging, the input of the neural network is a high-noise input, and an output obtained from such input is unfavorable for training the network model, yet discarding the data would be wasteful (after all, collecting and labeling medical images takes a great deal of time). Therefore, the consistency of the outputs of the two neural networks is calculated, with DSC as the consistency evaluation index for each output lesion segmentation result. In this way the noise samples are weighted and only the high-confidence part is used for training, so the information in the noise samples can be used effectively and the influence of label noise caused by the diffuse characteristic of bone imaging is reduced.
Drawings
FIG. 1 is a schematic diagram of the present invention.
Detailed Description
Example 1
This embodiment provides a bone imaging bone lesion segmentation method based on a deep neural network, in which a network model is constructed and trained; the trained network model realizes automatic segmentation of bone lesions. As shown in fig. 1, the segmentation method specifically comprises the following steps:
the method comprises the following steps of S1, obtaining a bone visualization image, and labeling the bone visualization image;
the network model needs to learn a large amount of data and data labels to construct a reliable mapping from the whole body bone visualization to the bone focus segmentation result.
Acquiring the bone imaging image means acquiring, directly from the hospital system, whole-body bone imaging images acquired and imaged by professional nuclear medicine technicians;
labeling the bone imaging image means that a junior physician delineates the bone lesion regions using the 3D Slicer annotation tool, after which a senior physician reviews and corrects the delineations and generates the final result;
aiming at the obtained bone imaging image, data processing is needed, and the processing mode is mainly data augmentation, namely, normalization operation is carried out on the input bone imaging image according to a window width and window level, random overturning, translation and rotation are carried out, and the diversity of training samples is expanded, so that the model learns the characteristics with stronger robustness, and the over-fitting phenomenon of the model is relieved;
to train and test the neural network, the processed data sets are classified into training sets and test sets according to 4.
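A minimal sketch of this preprocessing pipeline follows (the window values, shift range and rotation range are illustrative assumptions, and scipy is used here purely for illustration):

    import numpy as np
    from scipy.ndimage import rotate, shift

    def normalize(img, window_level, window_width):
        # clip to the display window and scale to [0, 1]
        lo = window_level - window_width / 2.0
        hi = window_level + window_width / 2.0
        return (np.clip(img, lo, hi) - lo) / (hi - lo)

    def augment(img, mask, rng):
        # apply the same random flip / translation / rotation to image and label
        if rng.random() < 0.5:
            img, mask = img[:, ::-1].copy(), mask[:, ::-1].copy()
        dy, dx = rng.integers(-10, 11, size=2)
        img = shift(img, (dy, dx), order=1, mode="nearest")
        mask = shift(mask, (dy, dx), order=0, mode="nearest")
        angle = rng.uniform(-10.0, 10.0)
        img = rotate(img, angle, reshape=False, order=1, mode="nearest")
        mask = rotate(mask, angle, reshape=False, order=0, mode="nearest")
        return img, mask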
S2, constructing a cascade network model based on a deep neural network;
the processing object of this application is nuclear medicine image, and the bone video picture, based on the diffuse characteristic of bone video picture itself, very good segmentation label is hardly drawn, and training data contains the noise composition great. Therefore, the method and the device creatively provide and construct a cascade network model based on a dual-path and a cascade architecture based on a deep neural network;
the cascaded network model based on the deep neural network comprises a first neural network, a second neural network and a refined network, key points of the network model are not in the feature extraction capability of the first neural network, the second neural network and the refined network, namely any existing advanced segmentation network model is adopted as a framework for feature extraction, for example, the first neural network, the second neural network and the refined network can adopt a Unet network model for carrying out the segmentation of the whole body bone visualization focus; the first neural network and the second neural network are independent neural networks which are arranged in parallel and have no shared weight, the bone imaging image is simultaneously used as the input of the two neural networks, namely the bone imaging image is used as the input of the first neural network, and a first bone focus segmentation result is output; the bone imaging image is used as the input of a second neural network, and a second segmentation result of the bone lesion is output; the bone imaging image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as input of a refining network, the bone imaging image, the first bone lesion segmentation result and the second bone lesion segmentation result are combined and input into the refining network on a channel, and finally the refined segmentation result is output;
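The wiring of this dual-path cascade can be sketched in PyTorch as follows; tiny_backbone is only a stand-in for a real segmentation backbone such as U-Net, and all layer sizes are assumptions of the sketch:

    import torch
    import torch.nn as nn

    def tiny_backbone(in_ch, out_ch=2):
        # stand-in for a real segmentation network such as U-Net
        return nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 1),
        )

    class CascadeSegmenter(nn.Module):
        # two parallel, independently weighted networks plus a refining network
        def __init__(self, make_net=tiny_backbone):
            super().__init__()
            self.net1 = make_net(1)            # first-stage network, 1-channel scan
            self.net2 = make_net(1)            # same architecture, no shared weights
            self.refine = make_net(1 + 2 + 2)  # image + both first-stage outputs

        def forward(self, x):
            p1 = self.net1(x)                  # first bone lesion segmentation result
            p2 = self.net2(x)                  # second bone lesion segmentation result
            fused = torch.cat([x, p1, p2], dim=1)  # channel-wise concatenation
            return p1, p2, self.refine(fused)      # refined segmentation result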
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and a bone lesion segmentation result whose DSC index is higher than a threshold value is defined as a reliable first-stage lesion segmentation result; based on the input reliable first-stage lesion segmentation results and the bone imaging image, lesions that are difficult for the first neural network and the second neural network to distinguish are extracted, and the refined segmentation result is output;
in the refining network, DSC serves as the consistency evaluation index for each bone lesion segmentation result output by the first and second neural networks: results whose DSC is higher than the threshold are retained and those below the threshold are discarded. In this way, the loss computed on noise samples is weighted and only the high-confidence part participates in training, which makes effective use of the information contained in the noise samples.
The DSC index is calculated by the following formula:

$$DSC = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}$$

wherein $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the deep-neural-network-based cascade network model and the true positive-sample pixels.
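One way to realize this per-lesion gating is to group candidate pixels into connected regions and compute the DSC between the two first-stage predictions region by region; the following sketch shows one possible realization (the threshold value and the connected-component grouping are assumptions of the sketch, not specified by the patent text):

    import numpy as np
    from scipy.ndimage import label

    def reliable_lesion_mask(pred1, pred2, thresh=0.7):
        # keep lesions on which the two first-stage networks agree (DSC > thresh)
        keep = np.zeros(pred1.shape, dtype=float)
        regions, n = label(np.logical_or(pred1, pred2))  # candidate lesion regions
        for r in range(1, n + 1):
            in_r = regions == r
            inter = np.logical_and(pred1, pred2)[in_r].sum()
            denom = pred1[in_r].sum() + pred2[in_r].sum()
            dsc = 2.0 * inter / denom if denom else 0.0  # per-lesion consistency
            if dsc > thresh:
                keep[in_r] = 1.0
        return keep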
S3, training a cascade network model based on a deep neural network;
training the deep-neural-network-based cascade network model constructed in step S2 by using the bone imaging images obtained in step S1 and their labels;
in training the cascaded network model, the training comprises:
step S31, forward calculation;
For a feedforward neural network with $L$ layers, let its training sample set be $X = \{x^{(1)}, x^{(2)}, \ldots, x^{(N)}\}$, where $d$ is the dimension of a single sample and $N$ is the number of training samples, so that the $i$-th sample can be represented as $x^{(i)} \in \mathbb{R}^{d}$. Let the weight of the connection from the $j$-th neuron of layer $l$ to the $k$-th neuron of layer $l+1$ be denoted $w_{kj}^{(l+1)}$, so that the connection weight matrix from layer $l$ to layer $l+1$ is $W^{(l+1)}$, and let the activation function of the neurons in layer $l$ be $f^{(l)}$. Forward computation is performed successively from the input layer to the output layer:

$$a^{(l+1)} = f^{(l+1)}\left(W^{(l+1)} a^{(l)} + b^{(l+1)}\right)$$

where $a^{(l)}$ denotes the activation values of the neurons in layer $l$ and $b^{(l+1)}$ is the bias vector of layer $l+1$; the activation values of the neurons in the network output layer are then:

$$a^{(L)} = f^{(L)}\left(W^{(L)} a^{(L-1)} + b^{(L)}\right)$$

the network output of the last layer is used to design a performance function $J\left(a^{(L)}, y\right)$, where $y$ is the label.
step S32, updating the weights;
for classification or segmentation tasks, the neural network adopts the cross entropy as its objective function, defined as follows:

$$J = -\frac{1}{N}\sum_{i=1}^{N}\left[y^{(i)}\log o^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - o^{(i)}\right)\right]$$

where $o$ and $y$ respectively denote the output of the last layer of the network and the label. By computing the gradient of the objective function $J$ with respect to the weights and iterating with a gradient descent algorithm, the neural network continually reduces the value of the objective function and thereby finds a suitable set of weights. The gradient descent update is:

$$W \leftarrow W - \alpha \frac{\partial J}{\partial W}$$

where $\alpha$ denotes the learning-rate constant;
step S33: model testing: after the training of the deep neural network model is completed, the recognition effect of the model on the test set is quantitatively evaluated through evaluation indexes, the evaluation indexes comprising TPVF, PPV, DSC and JSC, defined as follows:

$$TPVF = \frac{|V_S \cap V_G|}{|V_G|}, \qquad PPV = \frac{|V_S \cap V_G|}{|V_S|}, \qquad DSC = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}, \qquad JSC = \frac{|V_S \cap V_G|}{|V_S \cup V_G|}$$

where $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the deep-neural-network-based cascade network model and the true positive-sample pixels;
in step S33, the evaluation indexes further include sensitivity, F1 score and precision, defined as follows:

$$Sensitivity = \frac{TP}{TP + FN}, \qquad Precision = \frac{TP}{TP + FP}, \qquad F1 = \frac{2 \times Precision \times Sensitivity}{Precision + Sensitivity}$$

where $TP$, $FP$, $FN$ and $TN$ respectively denote the numbers of true positives, false positives, false negatives and true negatives.
In step S3, when training the deep-neural-network-based cascade network model, the loss function is:

$$\mathcal{L} = M \odot \left(L_1 + L_2\right) + \lambda L_r$$

where $M$ is a mask used to filter out low-confidence noise labels and $\lambda$ is a hyper-parameter used to balance the two terms; $L_1$ and $L_2$ are the cross-entropy loss functions of the first and second neural networks, respectively, and $L_r$ is the cross-entropy loss function of the refining network; the cross-entropy loss function is defined as:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

where $N$ is the number of samples, $C$ is the number of classes, $y_{i,c}$ is the label, and $p_{i,c}$ is the predicted probability output by the network.
S4, segmenting bone lesions;
segmenting bone lesions in the bone imaging image of the subject to be examined by using the trained deep-neural-network-based cascade network model to obtain a bone lesion segmentation result.
Example 2
This embodiment provides a bone imaging bone lesion segmentation system based on a deep neural network; by constructing a network model and training it, the trained network model realizes automatic segmentation of bone lesions. The segmentation system comprises:
the image acquisition and labeling module is used for acquiring a bone imaging image and labeling the bone imaging image;
the network model construction module is used for constructing a cascade network model based on a deep neural network;
constructing a cascade network model based on a deep neural network, which comprises a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel neural networks with no shared weights; the bone imaging image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone imaging image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; the bone imaging image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and a bone lesion segmentation result whose DSC index is higher than a threshold value is defined as a reliable first-stage lesion segmentation result; based on the input reliable first-stage lesion segmentation results and the bone imaging image, lesions that are difficult for the first neural network and the second neural network to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the formula:
Figure 646604DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 615697DEST_PATH_IMAGE002
and with
Figure 567472DEST_PATH_IMAGE003
Respectively representing the number of positive sample pixels and the number of real positive sample pixels predicted by a cascade network model based on a deep neural network;
the network model training module is used for training a cascade network model based on a deep neural network;
training the deep-neural-network-based cascade network model constructed by the network model construction module by using the bone imaging images acquired by the image acquisition and labeling module and their labels;
the lesion segmentation module is used for segmenting bone lesions;
segmenting bone lesions in the bone imaging image of the subject to be examined by using the trained deep-neural-network-based cascade network model to obtain a bone lesion segmentation result.
Example 3
This embodiment provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the bone imaging bone lesion segmentation method based on the deep neural network according to embodiment 1 are implemented.
Example 4
This embodiment provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the bone imaging bone lesion segmentation method based on the deep neural network according to embodiment 1 are implemented.

Claims (7)

1. A bone imaging bone lesion segmentation method based on a deep neural network, characterized by comprising the following steps:
s1, acquiring a bone development image, and labeling the bone development image;
s2, constructing a cascade network model based on a deep neural network;
constructing a cascade network model based on a deep neural network, which comprises a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel neural networks with no shared weights; the bone imaging image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone imaging image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; the bone imaging image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and a bone lesion segmentation result whose DSC index is higher than a threshold value is defined as a reliable first-stage lesion segmentation result; based on the input reliable first-stage lesion segmentation results and the bone imaging image, lesions that are difficult for the first neural network and the second neural network to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the formula:
Figure 636109DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 525568DEST_PATH_IMAGE002
and
Figure 732558DEST_PATH_IMAGE003
respectively representing the pixel number of the positive sample predicted by the cascade network model based on the deep neural network and the pixel number of the real positive sample;
s3, training a cascade network model based on a deep neural network;
training the deep-neural-network-based cascade network model constructed in step S2 by using the bone imaging images obtained in step S1 and their labels;
s4, dividing a bone disease focus;
and segmenting the femoral lesion in the bone image of the object to be detected by utilizing the trained cascade network model based on the deep neural network to obtain a bone lesion segmentation result.
2. The bone imaging bone lesion segmentation method based on the deep neural network as claimed in claim 1, characterized in that: in step S1, data augmentation processing is carried out on the obtained bone imaging image, the data augmentation processing comprising random flipping, translation and rotation;
the data-augmented bone imaging images, together with their labels, are divided into a training set and a test set at a ratio of 4:1.
3. The method for bone imaging bone lesion segmentation based on deep neural network as claimed in claim 1, wherein the training of the cascaded network model based on deep neural network in step S3 comprises:
step S31, forward calculation;
For a feedforward neural network with $L$ layers, let its training sample set be $X = \{x^{(1)}, x^{(2)}, \ldots, x^{(N)}\}$, where $d$ is the dimension of a single sample and $N$ is the number of training samples, so that the $i$-th sample can be represented as $x^{(i)} \in \mathbb{R}^{d}$. Let the weight of the connection from the $j$-th neuron of layer $l$ to the $k$-th neuron of layer $l+1$ be denoted $w_{kj}^{(l+1)}$, so that the connection weight matrix from layer $l$ to layer $l+1$ is $W^{(l+1)}$, and let the activation function of the neurons in layer $l$ be $f^{(l)}$. Forward computation is performed successively from the input layer to the output layer:

$$a^{(l+1)} = f^{(l+1)}\left(W^{(l+1)} a^{(l)} + b^{(l+1)}\right)$$

where $a^{(l)}$ denotes the activation values of the neurons in layer $l$ and $b^{(l+1)}$ is the bias vector of layer $l+1$; the activation values of the neurons in the network output layer are then:

$$a^{(L)} = f^{(L)}\left(W^{(L)} a^{(L-1)} + b^{(L)}\right)$$

the network output of the last layer is used to design a performance function $J\left(a^{(L)}, y\right)$, where $y$ is the label.
step S32, updating the weights;
for classification or segmentation tasks, the neural network adopts the cross entropy as its objective function, defined as follows:

$$J = -\frac{1}{N}\sum_{i=1}^{N}\left[y^{(i)}\log o^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - o^{(i)}\right)\right]$$

where $o$ and $y$ respectively denote the output of the last layer of the network and the label. By computing the gradient of the objective function $J$ with respect to the weights and iterating with a gradient descent algorithm, the neural network continually reduces the value of the objective function and thereby finds a suitable set of weights. The gradient descent update is:

$$W \leftarrow W - \alpha \frac{\partial J}{\partial W}$$

where $\alpha$ denotes the learning-rate constant;
step S33: model testing: after the training of the deep neural network model is completed, the recognition effect of the model on the test set is quantitatively evaluated through evaluation indexes, the evaluation indexes comprising TPVF, PPV, DSC and JSC, defined as follows:

$$TPVF = \frac{|V_S \cap V_G|}{|V_G|}, \qquad PPV = \frac{|V_S \cap V_G|}{|V_S|}, \qquad DSC = \frac{2\,|V_S \cap V_G|}{|V_S| + |V_G|}, \qquad JSC = \frac{|V_S \cap V_G|}{|V_S \cup V_G|}$$

where $V_S$ and $V_G$ respectively denote the positive-sample pixels predicted by the deep-neural-network-based cascade network model and the true positive-sample pixels.
4. The method as claimed in claim 3, characterized in that in step S33 the evaluation indexes further include sensitivity, F1 score and precision, defined as follows:

$$Sensitivity = \frac{TP}{TP + FN}, \qquad Precision = \frac{TP}{TP + FP}, \qquad F1 = \frac{2 \times Precision \times Sensitivity}{Precision + Sensitivity}$$

where $TP$, $FP$, $FN$ and $TN$ respectively denote the numbers of true positives, false positives, false negatives and true negatives.
5. The method as claimed in claim 1, wherein in step S3, when the deep-neural-network-based cascade network model is trained, the loss function is:

$$\mathcal{L} = M \odot \left(L_1 + L_2\right) + \lambda L_r$$

where $M$ is a mask used to filter out low-confidence noise labels and $\lambda$ is a hyper-parameter used to balance the two terms; $L_1$ and $L_2$ are the cross-entropy loss functions of the first and second neural networks, respectively, and $L_r$ is the cross-entropy loss function of the refining network; the cross-entropy loss function is defined as:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

where $N$ is the number of samples, $C$ is the number of classes, $y_{i,c}$ is the label, and $p_{i,c}$ is the predicted probability output by the network.
6. A bone imaging bone lesion segmentation system based on a deep neural network, characterized by comprising:
the image acquisition and labeling module is used for acquiring a bone imaging image and labeling the bone imaging image;
the network model building module is used for building a cascade network model based on the deep neural network;
constructing a cascade network model based on a deep neural network, which comprises a first neural network, a second neural network and a refining network; the first neural network and the second neural network are independent, parallel neural networks with no shared weights; the bone imaging image is used as the input of the first neural network, which outputs a first bone lesion segmentation result; the bone imaging image is used as the input of the second neural network, which outputs a second bone lesion segmentation result; the bone imaging image, the first bone lesion segmentation result and the second bone lesion segmentation result are used as the input of the refining network, which outputs the refined segmentation result;
in the refining network, the DSC index between the first bone lesion segmentation result and the second bone lesion segmentation result is calculated, and a bone lesion segmentation result whose DSC index is higher than a threshold value is defined as a reliable first-stage lesion segmentation result; based on the input reliable first-stage lesion segmentation results and the bone imaging image, lesions that are difficult for the first neural network and the second neural network to distinguish are extracted, and the refined segmentation result is output;
the DSC index is calculated by the formula:
Figure 923456DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure 659331DEST_PATH_IMAGE002
and with
Figure 327073DEST_PATH_IMAGE003
Respectively representing the pixel number of the positive sample predicted by the cascade network model based on the deep neural network and the pixel number of the real positive sample;
the network model training module is used for training a cascade network model based on a deep neural network;
training the deep-neural-network-based cascade network model constructed by the network model construction module by using the bone imaging images acquired by the image acquisition and labeling module and their labels;
the lesion segmentation module is used for segmenting bone lesions;
segmenting bone lesions in the bone imaging image of the subject to be examined by using the trained deep-neural-network-based cascade network model to obtain a bone lesion segmentation result.
7. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, performs the steps of the bone imaging bone lesion segmentation method based on a deep neural network as claimed in any one of claims 1 to 5.
CN202210941269.4A 2022-08-08 2022-08-08 Bone imaging bone lesion segmentation method, system and equipment based on deep neural network Active CN115019049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210941269.4A CN115019049B (en) 2022-08-08 2022-08-08 Bone imaging bone lesion segmentation method, system and equipment based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210941269.4A CN115019049B (en) 2022-08-08 2022-08-08 Bone imaging bone lesion segmentation method, system and equipment based on deep neural network

Publications (2)

Publication Number Publication Date
CN115019049A (en) 2022-09-06
CN115019049B true (en) 2022-12-16

Family

ID=83066074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210941269.4A Active CN115019049B (en) 2022-08-08 2022-08-08 Bone imaging bone lesion segmentation method, system and equipment based on deep neural network

Country Status (1)

Country Link
CN (1) CN115019049B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115261963A (en) * 2022-09-27 2022-11-01 南通如东依航电子研发有限公司 Method for improving deep plating capability of PCB
CN116188469A (en) * 2023-04-28 2023-05-30 之江实验室 Focus detection method, focus detection device, readable storage medium and electronic equipment
CN117036376B (en) * 2023-10-10 2024-01-30 四川大学 Lesion image segmentation method and device based on artificial intelligence and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035263A (en) * 2018-08-14 2018-12-18 电子科技大学 Brain tumor image automatic segmentation method based on convolutional neural networks
CN110555856A (en) * 2019-09-09 2019-12-10 成都智能迭迦科技合伙企业(有限合伙) Macular edema lesion area segmentation method based on deep neural network
CN112102339A (en) * 2020-09-21 2020-12-18 四川大学 Whole-body bone imaging bone segmentation method based on atlas registration
CN112308853A (en) * 2020-10-20 2021-02-02 平安科技(深圳)有限公司 Electronic equipment, medical image index generation method and device and storage medium
CN114037072A (en) * 2021-10-11 2022-02-11 浙江大华技术股份有限公司 Neural network optimization method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11568627B2 (en) * 2015-11-18 2023-01-31 Adobe Inc. Utilizing interactive deep learning to select objects in digital visual media

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035263A (en) * 2018-08-14 2018-12-18 电子科技大学 Brain tumor image automatic segmentation method based on convolutional neural networks
CN110555856A (en) * 2019-09-09 2019-12-10 成都智能迭迦科技合伙企业(有限合伙) Macular edema lesion area segmentation method based on deep neural network
CN112102339A (en) * 2020-09-21 2020-12-18 四川大学 Whole-body bone imaging bone segmentation method based on atlas registration
CN112308853A (en) * 2020-10-20 2021-02-02 平安科技(深圳)有限公司 Electronic equipment, medical image index generation method and device and storage medium
CN114037072A (en) * 2021-10-11 2022-02-11 浙江大华技术股份有限公司 Neural network optimization method and device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Harnessing 2D Networks and 3D Features for Automated Pancreas Segmentation from Volumetric CT Images; huanchen et al.; Medical Image Computing and Computer Assisted Intervention - MICCAI 2019; 2019-10-10; 339-347 *
A high-precision, low-overhead single-packet traceback method; Lu Ning et al.; Journal of Software; 2017-10-15 (No. 10); 217-236 *
Plant image classification based on discriminative key regions and deep learning; Zhang Xueqin et al.; Computer Engineering and Design; 2020-03-16 (No. 03); 150-156 *
Breast cancer medical image detection method based on improved deep learning; Chen Tong; Modern Computer; 2020-05-15 (No. 14); 36-40+45 *
Research and application of image semantic segmentation based on deep learning; Peng Chao; China Masters' Theses Full-text Database (Information Science and Technology); 2019-02-15 (No. 2); I138-1556 *
Research on a low-contrast SPECT image segmentation algorithm based on the Gaussian mixture model; Zhang Zhansheng; China Medical Engineering; 2021-11-25; Vol. 29, No. 11; 9-12 *

Also Published As

Publication number Publication date
CN115019049A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN115019049B (en) Bone imaging bone lesion segmentation method, system and equipment based on deep neural network
US20200303062A1 (en) Medical image aided diagnosis method and system combining image recognition and report editing
CN108898595B (en) Construction method and application of positioning model of focus region in chest image
CN108464840B (en) Automatic detection method and system for breast lumps
CN110288597B (en) Attention mechanism-based wireless capsule endoscope video saliency detection method
CN109523535B (en) Pretreatment method of lesion image
CN109858540B (en) Medical image recognition system and method based on multi-mode fusion
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN113781439B (en) Ultrasonic video focus segmentation method and device
CN112101451A (en) Breast cancer histopathology type classification method based on generation of confrontation network screening image blocks
CN115345819A (en) Gastric cancer image recognition system, device and application thereof
CN113284149A (en) COVID-19 chest CT image identification method and device and electronic equipment
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
Tang et al. CNN-based qualitative detection of bone mineral density via diagnostic CT slices for osteoporosis screening
CN112508884A (en) Comprehensive detection device and method for cancerous region
Radha Analysis of COVID-19 and pneumonia detection in chest X-ray images using deep learning
CN112419246B (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
CN110782441A (en) DR image pulmonary tuberculosis intelligent segmentation and detection method based on deep learning
Cao et al. X-ray classification of tuberculosis based on convolutional networks
Vasconcelos et al. A new risk assessment methodology for dermoscopic skin lesion images
Tang et al. CNN-based automatic detection of bone conditions via diagnostic CT images for osteoporosis screening
CN110827275A (en) Liver nuclear magnetic artery phase image quality grading method based on raspberry group and deep learning
Li et al. MVDI25K: A large-scale dataset of microscopic vaginal discharge images
Saglam et al. COVID-19 Detection from X-ray Images Using a New CNN Approach
Wan et al. Recognition of Cheating Behavior in Examination Room Based on Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant