CN111539963B - Bone scanning image hot spot segmentation method, system, medium and device - Google Patents

Bone scanning image hot spot segmentation method, system, medium and device

Info

Publication number
CN111539963B
CN111539963B (application CN202010251600.0A)
Authority
CN
China
Prior art keywords
dimensional
hot spot
characteristic
segmentation
image
Prior art date
Legal status
Active
Application number
CN202010251600.0A
Other languages
Chinese (zh)
Other versions
CN111539963A (en)
Inventor
乔宇 (Qiao Yu)
徐航 (Xu Hang)
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202010251600.0A priority Critical patent/CN111539963B/en
Publication of CN111539963A publication Critical patent/CN111539963A/en
Application granted granted Critical
Publication of CN111539963B publication Critical patent/CN111539963B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T7/11: Image analysis; region-based segmentation
    • G06N3/044: Neural networks; recurrent networks, e.g. Hopfield networks
    • G06N3/08: Neural networks; learning methods
    • G06T7/136: Segmentation; edge detection involving thresholding
    • G06T2207/20024: Special algorithmic details; filtering details
    • G06T2207/20081: Special algorithmic details; training, learning
    • G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T2207/30008: Biomedical image processing; bone

Abstract

The invention provides a bone scan image hot spot segmentation method, system, medium and device, wherein the method comprises the following steps: step S1: obtaining a 4-dimensional position feature vector; step S2: combining the 4-dimensional position feature, the 33-dimensional texture feature and the 1-dimensional neighborhood contrast feature into the artificial features of the bone scan image; step S3: constructing a 38-dimensional feature; step S4: training a patch-level classifier with MIL to obtain a hot spot probability distribution map, and obtaining an initial contour close to the segmentation target through threshold segmentation; step S5: obtaining the bone scan image hot spot segmentation result using level set evolution, and acquiring the segmentation result information. The invention uses a CGAN to obtain the position features and combines the position, texture and contrast features into the artificial features of the bone scan image.

Description

Bone scan image hot spot segmentation method, system, medium and device
Technical Field
The invention relates to the field of medical image segmentation, in particular to a bone scan image hot spot segmentation method, system, medium and device, and more particularly to a bone scan image hot spot segmentation method based on a conditional generative adversarial network (CGAN) and multi-instance learning (MIL).
Background
Hot spot segmentation of bone scan images is an indispensable tool for the clinical diagnosis of tumors and related bone diseases. In these images, the intensity of a tumor is much higher than that of other regions; such regions are called "hot spots" in the literature. With the progress of computer technology in recent years, research on hot spot segmentation of bone scan images has developed considerably.
In 2004, Yin et al. segmented potential hot spots using local maxima; in 2007, Huang et al. divided the bone scan image into 23 human body regions and built a linear regression analysis model on the regional gray-level mean and standard deviation to achieve hot spot segmentation; in 2008, Sadik et al. partitioned the image using prior knowledge of human anatomy and selected region-wise thresholds for hot spot segmentation according to the partitioning result; in 2011, Wang et al. applied adaptive threshold segmentation separately to hot spots in the spine and rib regions; in 2016, Geng et al. used multi-instance learning to obtain hot spot probability maps and then segmented them with a level set method.
Patent document CN110443792A discloses a bone scan image processing method and system based on a parallel deep neural network, in the technical field of bone scan image processing. It includes an image acquisition module; an image processing module that preprocesses and segments the bone scan image to obtain segmented part images; a model training module that builds a parallel deep neural network model and trains it on the segmented part images; and feature extraction with the trained model to detect parts with hot spots, classify the segmented part images accordingly, and mark hot zone and normal zone parts on the bone scan image. The effect of such bone scan image processing still leaves room for improvement.
Disclosure of Invention
In view of the defects in the prior art, the object of the present invention is to provide a bone scan image hot spot segmentation method, system, medium and device.
The invention provides a bone scan image hot spot segmentation method, comprising the following steps: step S1: dividing the bone scan image into 4 regions according to human anatomy knowledge using the pix2pix model in a CGAN, thereby obtaining a 4-dimensional position feature vector; step S2: combining the 4-dimensional position feature, the 33-dimensional texture feature and the 1-dimensional neighborhood contrast feature into the artificial features of the bone scan image; step S3: dividing the bone scan image into 4 regions with the CGAN, providing the position features, and constructing a 38-dimensional feature that includes them; step S4: training a patch-level classifier with MIL to obtain a hot spot probability distribution map, and obtaining an initial contour close to the segmentation target through threshold segmentation; step S5: obtaining the bone scan image hot spot segmentation result using level set evolution, and acquiring the segmentation result information. The innovation of the method is that the CGAN is used to obtain the position features, and the position, texture and contrast features are combined into the artificial features of the bone scan image.
Preferably, step S2 includes: step S2.1: providing the 4-dimensional position feature according to the region division result.
Preferably, step S2 includes: step S2.2: computing the 33-dimensional texture features with a simplified Leung-Malik filter bank; the Leung-Malik filter bank comprises: 24 oriented filters, 6 Laplacian-of-Gaussian filters and 3 Gaussian filters.
Preferably, step S2 includes: step S2.3: obtaining the neighborhood contrast feature by computing the chi-square distance between the current region and its symmetric region.
The invention provides a bone scan image hot spot segmentation system, comprising: module M1: dividing the bone scan image into 4 regions according to human anatomy knowledge using the pix2pix model in a CGAN, thereby obtaining a 4-dimensional position feature vector; module M2: combining the 4-dimensional position feature, the 33-dimensional texture feature and the 1-dimensional neighborhood contrast feature into the artificial features of the bone scan image; module M3: dividing the bone scan image into 4 regions with the CGAN, providing the position features, and constructing a 38-dimensional feature that includes them; module M4: training a patch-level classifier with MIL to obtain a hot spot probability distribution map, and obtaining an initial contour close to the segmentation target through threshold segmentation; module M5: obtaining the bone scan image hot spot segmentation result using level set evolution, and acquiring the segmentation result information. The system is characterized in that the CGAN is used to obtain the position features, and the position, texture and contrast features are combined into the artificial features of the bone scan image.
Preferably, module M2 comprises: module M2.1: providing the 4-dimensional position feature according to the region division result.
Preferably, module M2 comprises: module M2.2: computing the 33-dimensional texture features with a simplified Leung-Malik filter bank; the Leung-Malik filter bank comprises: 24 oriented filters, 6 Laplacian-of-Gaussian filters and 3 Gaussian filters.
Preferably, module M2 comprises: module M2.3: obtaining the neighborhood contrast feature by computing the chi-square distance between the current region and its symmetric region.
According to the present invention, there is provided a computer readable storage medium storing a computer program, which when executed by a processor, implements the steps of the bone scan image hot spot segmentation method.
The invention further provides a bone scan image hot spot segmentation device, comprising: a controller; the controller comprises a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the bone scan image hot spot segmentation method; alternatively, the controller comprises the bone scan image hot spot segmentation system.
Compared with the prior art, the invention has the following beneficial effects:
1. following common practice, the invention uses the Jaccard, Dice, F1-score and FN-rate indices as measures and compares them against other methods;
2. the method performs better in segmentation accuracy, shows clear advantages over other algorithms, and can accurately segment hot spot contours;
3. the invention uses a CGAN to obtain position features and combines the position, texture and contrast features into the artificial features of the bone scan image.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a simplified flow diagram of the present invention.
FIG. 2 is a detailed flow diagram of the hot spot segmentation method combining CGAN and MIL according to the present invention.
FIG. 3 is a schematic diagram of an example of bone scan image region division according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of the training principle of the generator in the CGAN according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of region division of a bone scan image by the trained generator according to an embodiment of the present invention.
FIG. 6 is a schematic diagram of the principle of training the classifier with MIL according to an embodiment of the present invention.
FIG. 7 is a schematic diagram of segmentation results on representative bone scan images according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and all of these fall within the scope of the present invention.
As shown in FIGS. 1 to 7, the bone scan image hot spot segmentation method according to the present invention comprises: step S1: dividing the bone scan image into 4 regions according to human anatomy knowledge using the pix2pix model in a CGAN, thereby obtaining a 4-dimensional position feature vector; step S2: combining the 4-dimensional position feature, the 33-dimensional texture feature and the 1-dimensional neighborhood contrast feature into the artificial features of the bone scan image; step S3: dividing the bone scan image into 4 regions with the CGAN, providing the position features, and constructing a 38-dimensional feature that includes them; step S4: training a patch-level classifier with MIL to obtain a hot spot probability distribution map, and obtaining an initial contour close to the segmentation target by threshold segmentation; step S5: obtaining the hot spot segmentation result of the bone scan image using level set evolution, and acquiring the segmentation result information. The method is characterized in that the CGAN is used to obtain the position features, and the position, texture and contrast features are combined into the artificial features of the bone scan image.
The level set method is commonly used for medical image segmentation, but hot spot boundaries in bone scan images are fuzzy and low in contrast, and the targets are scattered and irregular. An initial contour close to the targets therefore needs to be obtained first; on that basis, level set evolution readily yields accurate segmentation results.
Preferably, step S2 includes: step S2.1: providing the 4-dimensional position feature according to the region division result.
Preferably, step S2 includes: step S2.2: computing the 33-dimensional texture features with a simplified Leung-Malik filter bank; the Leung-Malik filter bank comprises: 24 oriented filters, 6 Laplacian-of-Gaussian filters and 3 Gaussian filters.
Preferably, step S2 includes: step S2.3: obtaining the neighborhood contrast feature by computing the chi-square distance between the current region and its symmetric region.
Specifically, in one embodiment, a bone scan image hot spot segmentation method combines CGAN and MIL. First, the bone scan image is divided into 4 regions with the CGAN according to human anatomy knowledge, yielding a 4-dimensional position feature vector. Including the position features, a 38-dimensional feature is constructed, and on this basis a patch-level classifier is trained with MIL. Finally, a hot spot probability distribution map is obtained, and the initial contour of the segmentation is obtained through threshold segmentation. The innovation of the method is that the CGAN provides position features that supplement and complete the artificial features.
The region division problem can be viewed as generating a corresponding output image from an input image: the bone scan image is the input, and the region division map is the output. The invention is realized with the pix2pix model in the CGAN, in which both the generator and the discriminator observe the input image, and the generator adopts a U-net structure instead of the encoder-decoder structure commonly used in CGANs. The generator learns the mapping from input to output, while the discriminator learns a loss function by being trained to compare the predicted output against the given output. Training and optimization of the network are achieved by optimizing a loss function combining the adversarial term and an L1 reconstruction term. The loss function is formulated as follows:
G* = arg min_G max_D L_CGAN(G, D) + λ·L_L1(G)

where G is the generator, D is the discriminator, λ is the regularization coefficient, and L_CGAN(G, D) is the adversarial loss; the L1 distance term is added to ensure similarity. The two terms are:

L_CGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))]

L_L1(G) = E_{x,y,z}[‖y − G(x, z)‖_1]

where x, y and z denote the input image, the given output and the random noise variable, respectively; G(x, z) is the predicted output; D(x, y) and D(x, G(x, z)) are the discriminator scores; E[·] denotes mathematical expectation and ‖·‖_1 the L1 distance. L_L1(G) computes the L1 distance between the predicted output and the given output.
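By way of illustration only, and not as the patent's implementation, the combined generator objective above can be evaluated numerically on toy arrays as follows; the weight λ = 100 is an assumed default borrowed from the pix2pix literature:

```python
import numpy as np

def cgan_loss(d_real, d_fake):
    # L_CGAN(G, D) = E[log D(x, y)] + E[log(1 - D(x, G(x, z)))]
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def l1_loss(y, g_out):
    # L_L1(G) = E[ ||y - G(x, z)||_1 ], averaged per pixel
    return np.mean(np.abs(y - g_out))

def generator_objective(d_real, d_fake, y, g_out, lam=100.0):
    # The generator trades off the adversarial term against lambda times the L1 term
    return cgan_loss(d_real, d_fake) + lam * l1_loss(y, g_out)
```

With a perfect reconstruction the L1 term vanishes and only the adversarial term remains, which is why λ controls how strongly the generator is pulled toward the given output.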
After network training is complete, the test bone scan image only needs to be fed to the trained generator network, which outputs the predicted region division map.
Artificially constructed 38-dimensional image features, comprising 4-dimensional position features, 33-dimensional texture features and a 1-dimensional contrast feature, are extracted from local image patches.
The 4-dimensional position feature is obtained from the region division result as follows:

Loc_k = (1/N) Σ_i 1[r_i = k],  k = 1, …, 4

where Loc_k denotes the k-th value of the position feature vector, k is the index of the feature vector, i is the index of a pixel in the local image patch, r_i is the region label of pixel i, and N is the number of pixels in the patch.
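A minimal sketch of this feature follows; note the per-patch normalization (fraction of pixels per region) is an assumption made here for illustration:

```python
import numpy as np

def position_feature(region_labels, n_regions=4):
    # region_labels: region index (0..n_regions-1) for each pixel of a local patch
    labels = np.asarray(region_labels).ravel()
    # Fraction of patch pixels falling in each region (assumed normalization)
    return np.array([(labels == k).mean() for k in range(n_regions)])
```

A patch straddling two regions thus gets a soft membership vector rather than a hard one-hot label.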
The 1-dimensional neighborhood contrast feature computes the chi-square distance between the current region and its symmetric region:

χ²(M, S) = Σ_i (M_i − S_i)² / (M_i + S_i)

where M and S are the gray-level histogram statistics of the current region and the symmetric region, and i is the index of the histogram bins.
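The chi-square contrast feature can be sketched as follows (illustrative only; the small epsilon guarding empty bins is an addition, not part of the original formula):

```python
import numpy as np

def chi_square_distance(m, s, eps=1e-12):
    # Chi-square distance between gray-level histograms M and S,
    # summed over histogram bins; eps avoids division by zero on empty bins
    m = np.asarray(m, dtype=float)
    s = np.asarray(s, dtype=float)
    return float(np.sum((m - s) ** 2 / (m + s + eps)))
```

Identical histograms give distance 0, and the distance grows as the current region's gray-level distribution diverges from its symmetric counterpart, which is what flags asymmetric uptake.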
The 33-dimensional texture features adopt a simplified design of the Leung-Malik filter bank, comprising 24 oriented filters, 6 Laplacian-of-Gaussian filters and 3 Gaussian filters; the 33 filter responses form the texture feature vector.
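As a hedged sketch of two of the three filter families (the 24 oriented filters are omitted for brevity), Gaussian and Laplacian-of-Gaussian kernels can be built as follows; the kernel size and the zero-mean shift are assumptions, not values given in the text:

```python
import numpy as np

def gaussian_kernel(sigma, size=15):
    # 2-D isotropic Gaussian, normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def log_kernel(sigma, size=15):
    # Laplacian of Gaussian, shifted to zero mean so flat regions respond with 0
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2.0 * sigma ** 2) / (sigma ** 4) * np.exp(-r2 / (2.0 * sigma ** 2))
    return k - k.mean()
```

Convolving a patch with each kernel at several scales and stacking the responses yields the texture descriptor; with 24 oriented, 6 LoG and 3 Gaussian kernels that stack is 33-dimensional.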
In multi-instance learning, the training set contains positive and negative bags, each holding some unlabeled instances: a positive bag contains at least one positive instance, while a negative bag contains only negative instances. That is, if any sample in a bag is positive, the bag is positive; if it contains no positive sample, the bag is negative.
A bag may be denoted X_i = {x_i1, …, x_im}, where i is the index of the bag and j is the index of the instance. The label of the bag is y_i ∈ {−1, 1}; since the instances are not explicitly labeled, their latent labels are denoted y_ij ∈ {−1, 1}. The bag label y_i and the instance labels y_ij satisfy:

y_i = max_j y_ij
The goal of a multi-instance learning algorithm is to learn an instance-level classifier h(x_ij). The bag-level classifier H(X_i) can be expressed through the instance-level classifier:

H(X_i) = max_j h(x_ij)
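The bag/instance relation above can be sketched directly (illustrative only):

```python
def bag_label(instance_labels):
    # A bag is positive (+1) iff at least one instance is positive
    return 1 if any(y == 1 for y in instance_labels) else -1

def bag_classifier(h, instances):
    # H(X_i) = max_j h(x_ij): the bag score is the maximum instance score
    return max(h(x) for x in instances)
```

The max aggregation is what lets a single positive instance label the whole bag, matching the definition of a positive bag.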
the MILBoost algorithm solves the multi-example learning problem by using a lifting method, and a series of weak classifiers h are trained by optimizing a loss function L (h) through a gradient descent methodtAnd combining these weak classifiers into a strong classifier h:
Figure BDA0002435691330000063
where x_ij is an instance, α_t is the weight of weak classifier h_t, and t is the iteration index. The loss function takes the negative log-likelihood form:

L(h) = −Σ_i ( 1[y_i = 1] log p_i + 1[y_i = −1] log(1 − p_i) )
where p_i, the probability that bag i is positive, is computed with the generalized-mean softmax function:

p_i = ( (1/m) Σ_j p_ij^r )^{1/r}

where m is the number of instances in the bag and r is the generalized-mean parameter.
p_ij is the probability that instance j in bag i is positive, computed with the sigmoid function:

p_ij = 1 / (1 + exp(−h_ij))
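The two probability computations can be sketched as follows; the generalized-mean exponent r is an assumed hyperparameter, not fixed in the text:

```python
import numpy as np

def instance_prob(h_scores):
    # p_ij = sigmoid(h_ij)
    return 1.0 / (1.0 + np.exp(-np.asarray(h_scores, dtype=float)))

def bag_prob(p_instances, r=4.0):
    # Generalized-mean ("softmax") aggregation of instance probabilities;
    # approaches the plain max as r grows, the plain mean as r -> 1
    p = np.asarray(p_instances, dtype=float)
    return float(np.mean(p ** r) ** (1.0 / r))
```

The soft aggregation keeps the bag probability differentiable, which is what makes the gradient-based weight computation in the next step possible.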
where h_ij = Σ_t α_t h_t(x_ij) is the output of the current strong classifier. Gradient descent is then used to compute the instance weights:

w_ij = −∂L(h)/∂h_ij
where w_ij is the weight, L is the loss function, and y is the label of the instance. The absolute value of the weights is then used to find the best weak classifier, i.e. the candidate maximizing the weighted score:

h_t = arg max_{h_candidate} Σ_{i,j} h_candidate(x_ij) · w_ij
where h_t is the optimal weak classifier, h_candidate is a candidate weak classifier, x is an instance, and w is a weight. The weight of the weak classifier is then found by line search:

α_t = arg min_α L(h + α h_t)
where h is the strong classifier from the previous iteration, h_t is the optimal weak classifier found in the new iteration, α is the weight of the weak classifier, and α_t is its optimal weight. Finally, the new weak classifier is added to the strong classifier as the classifier update. When training converges, the final classifier h(x_ij) is obtained, and the classification results over the whole image form a hot spot probability map.
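The weak-classifier selection and the line search for α_t can be sketched as follows (illustrative; the grid-based line search stands in for whatever search procedure the original uses):

```python
import numpy as np

def select_weak_classifier(candidates, X, w):
    # Best candidate maximizes sum_ij h(x_ij) * w_ij over the instance weights
    scores = [np.sum(h(X) * w) for h in candidates]
    return candidates[int(np.argmax(scores))]

def line_search_alpha(loss_fn, h_strong, h_t, alphas=None):
    # Grid line search for alpha_t = argmin_a L(h + a * h_t)
    if alphas is None:
        alphas = np.linspace(0.0, 2.0, 41)
    losses = [loss_fn(lambda X, a=a: h_strong(X) + a * h_t(X)) for a in alphas]
    return float(alphas[int(np.argmin(losses))])
```

Each boosting round thus needs only the instance weights w_ij from the gradient step and a one-dimensional search over α.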
A threshold is set on the hot spot probability map to obtain the initial contour of the segmentation:

φ_0(x) = 1 if p(x) > threshold, −1 otherwise

where φ_0 is the initial level set value, x is a pixel, p is the hot spot probability, and threshold is the threshold. Then a local signed level set segmentation method and a gradient descent method are used to optimize the level set energy function to an optimal value, thereby obtaining the final segmentation curve.
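The threshold-based level set initialization can be sketched as follows (the binary ±1 encoding of the initial level set is an assumption):

```python
import numpy as np

def initial_level_set(prob_map, threshold=0.5):
    # phi_0(x) = +1 where the hot spot probability exceeds the threshold, -1 elsewhere
    p = np.asarray(prob_map, dtype=float)
    return np.where(p > threshold, 1.0, -1.0)
```

The zero crossing of this function is the initial contour that the subsequent level set evolution refines.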
The invention provides a bone scan image hot spot segmentation system, comprising: module M1: dividing the bone scan image into 4 regions according to human anatomy knowledge using the pix2pix model in a CGAN, thereby obtaining a 4-dimensional position feature vector; module M2: combining the 4-dimensional position feature, the 33-dimensional texture feature and the 1-dimensional neighborhood contrast feature into the artificial features of the bone scan image; module M3: dividing the bone scan image into 4 regions with the CGAN, providing the position features, and constructing a 38-dimensional feature that includes them; module M4: training a patch-level classifier with MIL to obtain a hot spot probability distribution map, and obtaining an initial contour close to the segmentation target by threshold segmentation; module M5: obtaining the bone scan image hot spot segmentation result using level set evolution, and acquiring the segmentation result information. The system is characterized in that the CGAN is used to obtain the position features, and the position, texture and contrast features are combined into the artificial features of the bone scan image.
Preferably, module M2 comprises: module M2.1: providing the 4-dimensional position feature according to the region division result.
Preferably, module M2 comprises: module M2.2: computing the 33-dimensional texture features with a simplified Leung-Malik filter bank; the Leung-Malik filter bank comprises: 24 oriented filters, 6 Laplacian-of-Gaussian filters and 3 Gaussian filters.
Preferably, module M2 comprises: module M2.3: obtaining the neighborhood contrast feature by computing the chi-square distance between the current region and its symmetric region.
The hot spot segmentation performance of the invention is compared in the following table:

Method                     Jaccard    Dice
Region growing             0.5853     0.6935
Adaptive threshold         0.3729     0.5049
Local signed level set     0.5677     0.9029
MIL classifier             0.6493     0.7717
The invention              0.7253     0.8319
According to the present invention, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the bone scan image hot spot segmentation method.
The invention provides a bone scan image hot spot segmentation device, comprising: a controller; the controller comprises a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the bone scan image hot spot segmentation method; alternatively, the controller comprises the bone scan image hot spot segmentation system.
A conditional generative adversarial network (CGAN) is a type of generative adversarial network (GAN), a deep learning model. The framework contains two modules, a generative model and a discriminative model, whose adversarial learning against each other produces remarkably good output. In the original GAN theory, the generator and the discriminator are not required to be neural networks; they need only be functions that can fit the corresponding generation and discrimination mappings, although in practice deep neural networks are generally used. A good GAN requires a careful training method; otherwise, given the freedom of neural network models, the output may be unsatisfactory.
Multi-instance learning (MIL) is a widely used weakly supervised learning method. The training set comprises a series of bags, each containing several instances; labeling information is provided for the bags but not for the individual instances. A multi-instance algorithm learns from the bag-level labels to understand the training data, and performs classification at both the bag and instance levels.
The level set method has long been used in image segmentation; based on a dynamic boundary model, it works well for segmenting medical images. The basic idea is to separate the target from the image: a set consisting of a series of points evolves under constraints derived from changes in the image gray values, and the contour it forms is the segmentation curve, taken as the boundary of the target.
After running the algorithm to obtain a segmentation result, the target region in the image is obtained, and the performance of the segmentation algorithm is evaluated by comparing this region with the given real target region (ground truth); quantitative evaluation generally uses the Jaccard and Dice indices, computed as follows:

Jaccard(S, G) = |S ∩ G| / |S ∪ G|

Dice(S, G) = 2|S ∩ G| / (|S| + |G|)

where S is the segmentation result, G is the ground truth, ∪ and ∩ denote union and intersection respectively, and |·| denotes the number of pixels in the corresponding region. Both indices range over [0, 1]; the larger the value, the more accurate the segmentation result.
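The two indices can be sketched as follows (illustrative only):

```python
import numpy as np

def jaccard(seg, gt):
    # |S ∩ G| / |S ∪ G| over binary masks
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return float(inter) / float(union)

def dice(seg, gt):
    # 2|S ∩ G| / (|S| + |G|) over binary masks
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * float(inter) / float(seg.sum() + gt.sum())
```

Dice weights the overlap more heavily than Jaccard, which is why the two columns in the results table above move together but differ in magnitude.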
Following common practice, the Jaccard, Dice, F1-score and FN-rate indices are used as measures; compared with other methods, the proposed method performs better in segmentation accuracy, shows clear advantages over other algorithms, and can accurately segment hot spot contours.
Those skilled in the art will appreciate that, besides implementing the system provided by the present invention and its devices, units and modules purely as computer-readable program code, the method steps can be logically programmed so that the same functions are realized in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its devices, units and modules may be regarded as a hardware component; the devices, units and modules it includes for realizing various functions may be regarded as structures within that hardware component; and means for performing the functions may be regarded both as software units implementing the method and as structures within the hardware component.
The foregoing description has described specific embodiments of the present invention. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (6)

1. A bone scan image hot spot segmentation method, characterized by comprising the following steps:
step S1: dividing the bone scan image into 4 regions according to human anatomy knowledge using the pix2pix model in a CGAN, thereby obtaining a 4-dimensional position feature vector;
step S2: combining the 4-dimensional position feature, the 33-dimensional texture feature and the 1-dimensional neighborhood contrast feature into the artificial features of the bone scan image;
step S3: dividing the bone scan image into 4 regions using the pix2pix model in the CGAN, providing the position features, and constructing a 38-dimensional feature; extracting the artificially constructed 38-dimensional image features, including the 4-dimensional position features, 33-dimensional texture features and 1-dimensional neighborhood contrast features, from local image patches;
step S4: training a patch-level classifier with MIL to obtain a hot spot probability distribution map, and obtaining an initial contour close to the segmentation target through threshold segmentation;
step S5: obtaining the bone scan image hot spot segmentation result using level set evolution, and acquiring the bone scan image hot spot segmentation result information;
the 4-dimensional position feature is obtained according to the region division result, and the formula is as follows:
Figure FDA0003585459430000011
Figure FDA0003585459430000012
wherein Loc and
Figure FDA0003585459430000013
respectively denote the position feature vector and its component value, k denotes the index within the feature vector, and i denotes the index of a pixel in the local image patch;
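The formula for the 4-dimensional position feature is an unrendered image, so its exact definition cannot be read here. One plausible reading consistent with the surrounding text, where the k-th component of Loc measures how many pixels of the patch fall in anatomical region k, can be sketched as follows; the function name and the fraction-of-pixels interpretation are assumptions.

```python
import numpy as np

def position_feature(region_map_patch, num_regions=4):
    """Hypothetical 4-dimensional position feature: entry k is the share
    of patch pixels whose pix2pix region label equals k. Only one
    plausible reading of the claim's unrendered formula."""
    patch = np.asarray(region_map_patch).ravel()
    loc = np.zeros(num_regions)
    for k in range(num_regions):
        loc[k] = np.mean(patch == k)  # fraction of pixels labelled region k
    return loc
```

For a patch lying mostly inside one region, the corresponding component dominates, which is how the feature encodes anatomical position.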
setting a threshold value on the basis of the hot spot probability distribution diagram to obtain a segmented initial contour curve:
Figure FDA0003585459430000014
in the formula, φ0 is the level set value, x is the pixel, p is the hot spot probability, and threshold is the threshold; then the energy function of the level set is optimized by the local signed level set segmentation method using gradient descent to obtain the optimal value, thereby obtaining the final segmentation curve;
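The threshold-based initialization described above can be sketched as a binary step level set function: positive inside the candidate hot spot, negative outside. The constant c and the sign convention are assumptions, since the claim's formula is an unrendered image.

```python
import numpy as np

def init_level_set(prob_map, threshold=0.5, c=2.0):
    """Binary step initialisation of the level set from the hot spot
    probability map: +c where p(x) > threshold, -c elsewhere. The value
    of c and the exact formula are assumptions."""
    prob_map = np.asarray(prob_map, dtype=float)
    return np.where(prob_map > threshold, c, -c)
```

The resulting φ0 serves as the initial contour that the subsequent level set evolution refines into the final hot spot boundary.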
the step S2 includes: s2.1, providing 4-dimensional position characteristics according to the region division result;
s2.2, forming 33-dimensional texture features by a Leung-Malik filter bank;
the Leung-Malik filter bank comprises: 24 directional filters, 6 Laplacian-of-Gaussian filters and 3 Gaussian filters.
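An illustrative 33-dimensional texture feature in the spirit of the Leung-Malik bank named in the claim is sketched below: 24 oriented first-derivative responses (8 orientations at 3 scales), 6 Laplacian-of-Gaussian responses and 3 Gaussian responses, summarised per patch by their mean magnitude. The specific scales, orientation counts and summarisation are assumptions; the patent does not spell them out.

```python
import numpy as np
from scipy import ndimage

def lm_like_responses(patch):
    """Sketch of a 33-dimensional Leung-Malik-style texture feature:
    24 steered Gaussian-derivative responses + 6 Laplacian-of-Gaussian
    + 3 Gaussian. All parameter choices are illustrative."""
    patch = np.asarray(patch, dtype=float)
    feats = []
    # 24 directional filters: first derivatives steered to 8 orientations
    for sigma in (1.0, 2.0, 4.0):
        gx = ndimage.gaussian_filter(patch, sigma, order=(0, 1))
        gy = ndimage.gaussian_filter(patch, sigma, order=(1, 0))
        for k in range(8):
            theta = k * np.pi / 8
            resp = np.cos(theta) * gx + np.sin(theta) * gy
            feats.append(np.abs(resp).mean())
    # 6 Laplacian-of-Gaussian filters at increasing scales
    for sigma in (1.0, 1.5, 2.0, 3.0, 4.0, 6.0):
        feats.append(np.abs(ndimage.gaussian_laplace(patch, sigma)).mean())
    # 3 Gaussian filters
    for sigma in (1.0, 2.0, 4.0):
        feats.append(ndimage.gaussian_filter(patch, sigma).mean())
    return np.array(feats)
```

Concatenating this 33-dimensional vector with the 4-dimensional position feature and the 1-dimensional neighborhood contrast yields the 38-dimensional artificial feature of step S3.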
2. The bone scan image hot spot segmentation method according to claim 1, wherein the step S2 includes: and S2.3, acquiring neighborhood contrast characteristics by calculating the chi-square distance between the current region and the symmetric region.
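The chi-square neighborhood contrast of claim 2 can be sketched as the chi-square distance between the intensity histogram of the current region and that of its symmetric counterpart. Pairing regions by left-right mirroring across the skeleton's vertical midline, and the histogram parameters, are assumptions based on the claim's wording.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def neighborhood_contrast(image, x0, x1, y0, y1, bins=16):
    """1-dimensional contrast feature: chi-square distance between the
    histogram of the patch [y0:y1, x0:x1] and that of its left-right
    mirror patch. The mirroring convention is an assumption."""
    w = image.shape[1]
    patch = image[y0:y1, x0:x1]
    mirror = image[y0:y1, w - x1:w - x0]  # symmetric region across midline
    h1, _ = np.histogram(patch, bins=bins, range=(0, 1))
    h2, _ = np.histogram(mirror, bins=bins, range=(0, 1))
    return chi_square_distance(h1, h2)
```

A hot spot typically breaks the skeleton's left-right symmetry, so a large distance flags an asymmetric, and therefore suspicious, region.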
3. A system for hot spot segmentation of a bone scan image, comprising:
module M1: dividing the bone scanning image into 4 areas according to the human anatomy knowledge by using a pix2pix model in the CGAN, thereby obtaining a 4-dimensional position feature vector;
module M2: combining the 4-dimensional position characteristic, the 33-dimensional texture characteristic and the 1-dimensional neighborhood contrast characteristic into an artificial characteristic of the bone scanning image;
module M3: dividing the bone scanning image into 4 areas by using the pix2pix model in the CGAN to provide position features, and constructing a 38-dimensional feature; extracting the artificially constructed 38-dimensional image features, comprising the 4-dimensional position features, the 33-dimensional texture features and the 1-dimensional neighborhood contrast features, from the local image patches;
module M4: training a small block level classifier by using MIL to obtain a probability distribution map of hot spots, and obtaining an initial contour similar to a segmentation target through threshold segmentation;
module M5: obtaining a bone scanning image hot spot segmentation result by using level set evolution, and acquiring bone scanning image hot spot segmentation result information;
the 4-dimensional position feature is obtained according to the region division result, and the formula is as follows:
Figure FDA0003585459430000021
Figure FDA0003585459430000022
wherein Loc and
Figure FDA0003585459430000023
respectively denote the position feature vector and its component value, k denotes the index within the feature vector, and i denotes the index of a pixel in the local image patch;
setting a threshold value on the basis of the hot spot probability distribution diagram to obtain a segmented initial contour curve:
Figure FDA0003585459430000024
in the formula, φ0 is the level set value, x is the pixel, p is the hot spot probability, and threshold is the threshold; then the energy function of the level set is optimized by the local signed level set segmentation method using gradient descent to obtain the optimal value, thereby obtaining the final segmentation curve;
the module M2 includes:
a module M2.1 providing 4-dimensional position features from the region division results;
a module M2.2, wherein 33-dimensional texture features are formed by a Leung-Malik filter bank;
the Leung-Malik filter bank comprises: 24 directional filters, 6 Laplacian-of-Gaussian filters and 3 Gaussian filters.
4. The bone scan image hot spot segmentation system of claim 3, wherein the module M2 comprises: and a module M2.3, obtaining the neighborhood contrast characteristic by calculating the chi-square distance between the current region and the symmetric region.
5. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the bone scan image hot spot segmentation method according to any one of claims 1 to 2.
6. A bone scan image hotspot segmentation device, comprising: a controller;
the controller comprises a computer readable storage medium of claim 5 having stored thereon a computer program which, when executed by a processor, implements the steps of the bone scan image hot spot segmentation method of any one of claims 1 to 2; alternatively, the controller comprises the bone scan image hot spot segmentation system of any one of claims 3 to 4.
CN202010251600.0A 2020-04-01 2020-04-01 Bone scanning image hot spot segmentation method, system, medium and device Active CN111539963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010251600.0A CN111539963B (en) 2020-04-01 2020-04-01 Bone scanning image hot spot segmentation method, system, medium and device


Publications (2)

Publication Number Publication Date
CN111539963A CN111539963A (en) 2020-08-14
CN111539963B true CN111539963B (en) 2022-07-15

Family

ID=71952114


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096804A (en) * 2010-12-08 2011-06-15 上海交通大学 Method for recognizing image of carcinoma bone metastasis in bone scan
CN106373168A (en) * 2016-11-24 2017-02-01 北京三体高创科技有限公司 Medical image based segmentation and 3D reconstruction method and 3D printing system
CN109544518A (en) * 2018-11-07 2019-03-29 中国科学院深圳先进技术研究院 A kind of method and its system applied to the assessment of skeletal maturation degree
CN110443792A (en) * 2019-08-06 2019-11-12 四川医联信通医疗科技有限公司 A kind of bone scanning image processing method and system based on parallel deep neural network

Family Cites Families (1)

USRE47609E1 (en) * 2007-12-28 2019-09-17 Exini Diagnostics Ab System for detecting bone cancer metastases


Non-Patent Citations (2)

Title
Hang Xu et al., "Combining CGAN and MIL for Hotspot Segmentation in Bone Scintigraphy," ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020-05-14, pp. 1404-1408 *
Shijie Geng et al., "Combining CNN and MIL to Assist Hotspot Segmentation in Bone Scintigraphy," ICONIP 2015, 2015-12-31, pp. 445-452 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant