CN110443792A - Bone scan image processing method and system based on a parallel deep neural network - Google Patents
Bone scan image processing method and system based on a parallel deep neural network
- Publication number: CN110443792A
- Application number: CN201910720428.6A
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- deep neural
- bone scanning
- parallel deep
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/24 — Pattern recognition; analysing; classification techniques
- G06T5/20 — Image enhancement or restoration by the use of local operators
- G06T5/30 — Erosion or dilatation, e.g. thinning
- G06T5/90
- G06T7/0012 — Biomedical image inspection
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T2207/20032 — Median filtering
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20132 — Image cropping
- G06T2207/30008 — Bone
- Y02E30/30 — Nuclear fission reactors
Abstract
The invention discloses a bone scan image processing method and system based on a parallel deep neural network, in the field of bone scan image processing. The system comprises an image acquisition module; an image processing module that preprocesses and segments the bone scan image to obtain segmented region images; a model training module that builds a parallel deep neural network model and trains it on the segmented region images to obtain a trained model; a feature extraction module that applies the trained parallel deep neural network model to the segmented region images to detect the regions that contain hot spots; and a classification module that classifies the segmented region images by the presence of hot spots and marks the hot regions and normal regions distinctly on the bone scan image. The invention solves the problems of manual bone scan image processing: unclear highlighting, inconsistent standards, error, slow processing, and low accuracy.
Description
Technical field
The invention belongs to the field of bone scan image processing and relates to a bone scan image processing method and system based on a parallel deep neural network.
Background art
Bone scanning (bone scintigraphy) is a nuclear medicine imaging examination of the whole skeleton: a radioactive imaging detector (such as a gamma camera or an ECT scanner) detects sites of increased radionuclide uptake throughout the skeleton and produces a bone scan image. For a long time, before interpreting a whole-body bone scan, the nuclear medicine physician has had to adjust the brightness and contrast of the image so that it can be read clearly, then interpret the image on the basis of personal expertise and experience, and write the examination report by hand.
This mode of manual image processing, manual reading, and hand-written reporting has many defects and deficiencies. Because brightness and contrast must be adjusted manually, each reader applies a somewhat different degree of adjustment, so abnormal parts of the image may not be highlighted clearly, causing the physician to overlook them and interpret the image incorrectly. Different readers also parse the same image very differently: for one and the same image, different physicians give different interpretations, and this lack of a uniform standard makes the results error-prone and less accurate. In addition, manually processing images, reading them, and writing up the findings consume a great deal of the physician's energy and time and increase the physician's workload, while the patient must wait a long time before receiving the results, which increases the patient's anxiety and suffering.
In view of these problems, the present invention proposes a bone scan image processing method and system based on a parallel deep neural network.
Summary of the invention
The object of the present invention is to provide a bone scan image processing method and system based on a parallel deep neural network, solving the problems of manual bone scan image processing: unclear highlighting, inconsistent standards, error, slow processing, and low accuracy.
The technical solution adopted by the invention is as follows:
A bone scan image processing method based on a parallel deep neural network, comprising the following steps:
Step 1: import the bone scan images; the bone scan images comprise an anterior image and a posterior image;
Step 2: preprocess and segment the bone scan images to obtain segmented bone region images;
Step 3: build a parallel deep neural network model and train it on the segmented region images to obtain a trained parallel deep neural network model;
Step 4: perform feature extraction on the segmented region images of Step 2 with the trained parallel deep neural network model to obtain the regions that contain hot spots;
Step 5: classify the segmented region images according to the hot-spot regions extracted in Step 4, and mark the hot regions and normal regions distinctly on the bone scan image.
Further, Step 1 comprises:
Step 1.1: after scanning the whole skeleton with a radioactive imaging detector, obtain bone scan images in DICOM format, namely the anterior image and the posterior image;
Step 1.2: import the bone scan images in the form of image matrices.
Further, Step 2 comprises:
Step 2.1: apply, to the anterior image and the posterior image in turn, contrast adjustment, automatic threshold segmentation, a closing operation, median filtering, a first opening operation, a mask operation, Gaussian blur, equalization, image cropping, simple threshold segmentation, and a second opening operation, to improve image quality;
Step 2.2: perform key-point localization and image segmentation on the preprocessed anterior and posterior images with an anatomy-based partitioning algorithm, obtaining several segmented region images.
Further, Step 3 comprises:
Step 3.1: according to the number of segmented region images produced by Step 2, build a parallel deep neural network model containing a corresponding number of single deep neural networks;
Step 3.2: have physicians annotate the segmented region images, marking the regions that contain hot spots;
Step 3.3: input the annotated segmented region images into the parallel deep neural network model simultaneously for training, obtaining a trained parallel deep neural network model.
Further, Step 5 comprises:
Step 5.1: classify all segmented region images according to the hot-spot regions extracted in Step 4, obtaining a hot-region image set of segments that contain hot spots and a normal-region image set of segments without hot spots;
Step 5.2: merge the images of the hot-region and normal-region sets into a whole bone scan image, and mark the hot regions and normal regions distinctly on the whole bone scan image.
A bone scan image processing system based on a parallel deep neural network, comprising an image acquisition module, an image processing module, a model training module, a feature extraction module, and a classification module;
the image acquisition module imports the bone scan images, which comprise an anterior image and a posterior image;
the image processing module preprocesses and segments the bone scan images to obtain segmented bone region images;
the model training module builds a parallel deep neural network model and trains it on the segmented region images to obtain a trained parallel deep neural network model;
the feature extraction module performs feature extraction, with the trained parallel deep neural network model, on the segmented region images produced by the image processing module, obtaining the regions that contain hot spots;
the classification module classifies the segmented region images according to the hot-spot regions and marks the hot regions and normal regions distinctly on the bone scan image.
Further, importing the bone scan images in the image acquisition module comprises:
after scanning the whole skeleton with a radioactive imaging detector, obtaining bone scan images in DICOM format, namely the anterior image and the posterior image; then importing the bone scan images into the image acquisition module in the form of image matrices.
Further, in the image processing module, preprocessing the bone scan images comprises: applying, to the anterior image and the posterior image in turn, contrast adjustment, automatic threshold segmentation, a closing operation, median filtering, a first opening operation, a mask operation, Gaussian blur, equalization, image cropping, simple threshold segmentation, and a second opening operation;
segmenting the bone scan images comprises: performing key-point localization and image segmentation on the preprocessed anterior and posterior images with an anatomy-based partitioning algorithm, obtaining several segmented region images.
Further, building the parallel deep neural network model in the model training module and training it on the segmented region images comprises:
according to the number of segmented region images produced by the image processing module, building a parallel deep neural network model containing a corresponding number of single deep neural networks; annotating the segmented region images, marking the regions that contain hot spots; and then inputting the annotated segmented region images into the parallel deep neural network model simultaneously for training.
Further, in the classification module, classifying the segmented region images according to the hot-spot regions and marking the hot regions and normal regions distinctly on the bone scan image comprises:
classifying all segmented region images according to the hot-spot regions, obtaining a hot-region image set of segments that contain hot spots and a normal-region image set of segments without hot spots; then merging the images of the hot-region and normal-region sets into a whole bone scan image, and marking the hot regions and normal regions distinctly on the whole bone scan image.
In summary, by adopting the above technical solution, the beneficial effects of the present invention are:
1. The bone scan image processing method and system based on a parallel deep neural network use deep learning and image processing techniques: after the acquired bone scan images are preprocessed and segmented, a parallel deep neural network model is built and trained on the segmented region images; the trained model performs feature extraction on the segmented region images to detect the clearly highlighted regions that contain hot spots; finally the segmented region images are classified and the hot regions and normal regions are marked distinctly on the bone scan image. The multiple parallel networks run simultaneously, far faster than manual image processing, which improves processing speed; and performing image processing and feature extraction uniformly on a terminal device reduces the error of manual processing and increases the accuracy of bone scan image processing.
2. When the present invention is used to process bone scan images, once the hot regions and normal regions are marked distinctly on the bone scan image, the physician only needs to diagnose the hot-region segments and no longer needs to diagnose the normal-region segments, which speeds up reading and reduces the physician's workload.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only certain embodiments of the present invention and therefore should not be regarded as limiting its scope; a person of ordinary skill in the art can derive other relevant drawings from these drawings without creative effort. In the drawings:
Fig. 1 is a flowchart of a bone scan image processing method based on a parallel deep neural network;
Fig. 2 illustrates the process by which the present invention handles a bone scan image;
Fig. 3 is a flowchart of the preprocessing in Step 2 of the present invention;
Fig. 4 shows the anterior and posterior images obtained in Embodiment 1 of the present invention;
Fig. 5 shows the anterior image of Embodiment 1 after equalization;
Fig. 6 shows the anterior image of Embodiment 1 after preprocessing is complete;
Fig. 7 shows the key-point localization and partition layout of the anterior image in Embodiment 1;
Fig. 8 shows the segmented region images obtained after segmenting the anterior image in Embodiment 1.
Specific embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and do not limit it; the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings, can be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present invention, but merely represents selected embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.
It should be noted that relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device comprising that element.
A bone scan image processing method and system based on a parallel deep neural network solves the problems of manual bone scan image processing: unclear highlighting, inconsistent standards, error, slow processing, and low accuracy.
A bone scan image processing method based on a parallel deep neural network, comprising the following steps:
Step 1: import the bone scan images; the bone scan images comprise an anterior image and a posterior image;
Step 2: preprocess and segment the bone scan images to obtain segmented bone region images;
Step 3: build a parallel deep neural network model and train it on the segmented region images to obtain a trained parallel deep neural network model;
Step 4: perform feature extraction on the segmented region images of Step 2 with the trained parallel deep neural network model to obtain the regions that contain hot spots;
Step 5: classify the segmented region images according to the hot-spot regions extracted in Step 4, and mark the hot regions and normal regions distinctly on the bone scan image.
A bone scan image processing system based on a parallel deep neural network, comprising an image acquisition module, an image processing module, a model training module, a feature extraction module, and a classification module;
the image acquisition module imports the bone scan images, which comprise an anterior image and a posterior image;
the image processing module preprocesses and segments the bone scan images to obtain segmented bone region images;
the model training module builds a parallel deep neural network model and trains it on the segmented region images to obtain a trained parallel deep neural network model;
the feature extraction module performs feature extraction, with the trained parallel deep neural network model, on the segmented region images produced by the image processing module, obtaining the regions that contain hot spots;
the classification module classifies the segmented region images according to the hot-spot regions and marks the hot regions and normal regions distinctly on the bone scan image.
The present invention uses deep learning and image processing techniques: after the acquired bone scan images are preprocessed and segmented, a parallel deep neural network model is built and trained on the segmented region images, and the trained model performs feature extraction on the segmented region images to detect the clearly highlighted regions that contain hot spots; finally the segmented region images are classified, and the hot regions and normal regions are marked distinctly on the bone scan image. Because the image is analyzed with a parallel deep neural network model whose multiple networks run simultaneously, processing is far faster than manual image processing, which improves processing speed; performing image processing and feature extraction uniformly on a terminal device reduces manual-processing error, increases the accuracy of bone scan image processing, and also reduces the physician's workload.
The features and performance of the present invention are described in further detail below with reference to an embodiment.
Embodiment 1
A preferred embodiment of the present invention provides a bone scan image processing method based on a parallel deep neural network, as shown in Fig. 1 and Fig. 2, comprising the following steps:
Step 1: import the bone scan images; the bone scan images comprise an anterior image and a posterior image;
Step 1.1: after scanning the whole skeleton with a radioactive imaging detector, obtain bone scan images in DICOM format, namely the anterior image and the posterior image, as shown in Fig. 4;
Step 1.2: import the bone scan images in the form of image matrices;
Step 2: preprocess and segment the bone scan images to obtain segmented bone region images;
Step 2.1: preprocess the anterior image and the posterior image in turn; the preprocessing flow, shown in Fig. 3, applies to each image, in order, contrast adjustment, automatic threshold segmentation, a closing operation, median filtering, a first opening operation, a mask operation, Gaussian blur, equalization, image cropping, simple threshold segmentation, and a second opening operation, improving image quality and preparing for the subsequent model training and image feature extraction;
Step 2.1.1: adjust the contrast of the anterior and posterior images; adjusting the contrast balances the pixel values and avoids parts of the image having pixel values that are too low;
Step 2.1.2: perform automatic threshold segmentation to remove the low-valued noise outside the bone and soft tissue; the pixel values of this noise lie roughly between 0 and 4;
Step 2.1.3: perform a closing operation to fill the vacant pixels inside the skeleton;
Step 2.1.4: perform median filtering to remove isolated noise points and smooth the image without blurring it;
Step 2.1.5: perform a first opening operation to remove the burr pixels at the edge of the skeleton;
Step 2.1.6: perform a mask operation to obtain the foreground of the bone scan image;
Step 2.1.7: apply Gaussian blur to the foreground to spread the pixel-value distribution, in preparation for equalization;
Step 2.1.8: perform equalization to bring out the skeleton and soft tissue; Fig. 5 shows the anterior image after equalization;
Step 2.1.9: crop the image, in preparation for the key-point localization during segmentation;
Step 2.1.10: perform simple threshold segmentation to suppress the soft tissue and obtain the skeleton;
Step 2.1.11: perform a second opening operation to remove isolated pixels and the burr noise produced by threshold segmentation; preprocessing is now complete and yields the preprocessed anterior and posterior images; Fig. 6 shows the anterior image after preprocessing is complete;
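The eleven-stage chain of Step 2.1 can be sketched with off-the-shelf operators. The following is a minimal illustration, not the patent's implementation: `scipy.ndimage` stands in for each stage, and all kernel sizes and the soft-tissue threshold are assumed values (only the 0–4 noise ceiling is stated in the text); the cropping of Step 2.1.9 is omitted because it depends on the key points.

```python
import numpy as np
from scipy import ndimage

def preprocess(img, noise_ceiling=4, soft_tissue_thresh=60):
    """Sketch of the Step 2.1 chain; kernel sizes and thresholds are assumed."""
    img = img.astype(np.float64)
    # 2.1.1 contrast adjustment: stretch to the full 8-bit range
    img = 255 * (img - img.min()) / max(np.ptp(img), 1)
    # 2.1.2 automatic threshold segmentation: drop low-valued noise (~0..4)
    img[img <= noise_ceiling] = 0
    # 2.1.3 closing: fill vacant pixels inside the skeleton
    img = ndimage.grey_closing(img, size=(3, 3))
    # 2.1.4 median filtering: remove isolated noise points
    img = ndimage.median_filter(img, size=3)
    # 2.1.5 first opening: remove burr pixels at the skeleton edge
    img = ndimage.grey_opening(img, size=(3, 3))
    # 2.1.6 mask operation: keep only the foreground
    mask = img > 0
    img = img * mask
    # 2.1.7 Gaussian blur: spread the pixel-value distribution
    img = ndimage.gaussian_filter(img, sigma=1)
    # 2.1.8 histogram equalization over the foreground
    flat = np.sort(img[mask])
    if flat.size:
        ranks = np.searchsorted(flat, img[mask], side="right")
        img[mask] = 255 * ranks / flat.size
    # 2.1.10 simple threshold segmentation: suppress soft tissue
    img[img < soft_tissue_thresh] = 0
    # 2.1.11 second opening: remove isolated pixels left by thresholding
    img = ndimage.grey_opening(img, size=(3, 3))
    return img.astype(np.uint8)

rng = np.random.default_rng(0)
scan = rng.integers(0, 255, size=(64, 32)).astype(np.uint8)
out = preprocess(scan)
```

The ordering matters: the mask is fixed before the blur so that equalization is computed only over foreground pixels, mirroring the patent's foreground-then-equalize sequence.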
Step 2.2: perform key-point localization and image segmentation on the preprocessed anterior and posterior images with the anatomy-based partitioning algorithm; Fig. 7 shows the key-point localization and partition layout for the anterior image; 13 segmented region images are obtained from each image after segmentation, 26 segmented region images in total; the 13 regions are the head, left shoulder, right shoulder, left elbow, right elbow, left chest, right chest, spine, pelvis, left knee, right knee, left ankle, and right ankle; Fig. 8 shows the 13 segmented region images obtained after segmenting the anterior image;
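Once the key points are located, segmentation reduces to cropping boxes out of the whole-body image. The sketch below assumes a fixed set of hypothetical key-point boxes for a 256×96 anterior image; the patent's anatomy-based partitioning algorithm derives these coordinates per patient, so every number here is invented for illustration only.

```python
import numpy as np

# Hypothetical key-point boxes, (top, bottom, left, right), for a 256x96
# anterior image. The 13 region names follow the embodiment's list; the
# coordinates are invented placeholders, not the patent's algorithm output.
REGIONS = {
    "head":           (0,   40, 30, 66),
    "left_shoulder":  (40,  70, 48, 96),
    "right_shoulder": (40,  70,  0, 48),
    "left_elbow":     (70, 120, 72, 96),
    "right_elbow":    (70, 120,  0, 24),
    "left_chest":     (40, 110, 48, 72),
    "right_chest":    (40, 110, 24, 48),
    "spine":          (40, 130, 40, 56),
    "pelvis":         (110, 150, 24, 72),
    "left_knee":      (150, 200, 48, 72),
    "right_knee":     (150, 200, 24, 48),
    "left_ankle":     (200, 256, 48, 72),
    "right_ankle":    (200, 256, 24, 48),
}

def segment(image):
    """Crop the preprocessed whole-body image into the 13 region images."""
    return {name: image[t:b, l:r] for name, (t, b, l, r) in REGIONS.items()}

anterior = np.zeros((256, 96), dtype=np.uint8)
parts = segment(anterior)
```

Applying the same function to the posterior image yields the second set of 13 regions, giving the 26 segmented region images the embodiment feeds to the parallel model.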
Step 3: build the parallel deep neural network model and train it on the segmented region images to obtain a trained parallel deep neural network model;
Step 3.1: for the 26 segmented region images of Step 2, build a parallel deep neural network model containing 26 single ResNet-50 deep neural networks;
Step 3.2: have physicians annotate the 26 segmented region images, marking the regions that contain hot spots;
Step 3.3: input the 26 annotated segmented region images into the parallel deep neural network model simultaneously for training, obtaining a trained parallel deep neural network model;
Step 4: perform feature extraction on the segmented-region images from step 2 with the trained parallel deep neural network model to obtain the locations where hot spots are present;
Step 4.1: use the parallel deep neural network model to extract features from the 26 segmented-region images from step 2 simultaneously and detect the locations where hot spots are present; the detection response time of the parallel deep neural network model is on the order of milliseconds;
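The millisecond-level response claimed in step 4.1 comes from running the 26 per-region networks simultaneously. A minimal sketch of that fan-out follows, with trivial stand-in detectors in place of the trained ResNet-50s; the toy brightness rule, names, and data are assumptions for illustration, not the patent's method.

```python
from concurrent.futures import ThreadPoolExecutor

def make_detector(region_name):
    """Stand-in for one trained per-region network; returns True if a hot
    spot is found. A real detector would run ResNet-50 on the region image."""
    def detect(image):
        return max(image) > 150  # toy rule: any very bright pixel is a 'hot spot'
    return detect

# 26 toy region images: even-numbered regions contain a bright (hot) pixel.
regions = {f"region-{i}": [10, 20, 200 if i % 2 == 0 else 30] for i in range(26)}
detectors = {name: make_detector(name) for name in regions}

# Fan the 26 region images out to their detectors in parallel (step 4.1).
with ThreadPoolExecutor(max_workers=26) as pool:
    futures = {name: pool.submit(detectors[name], img)
               for name, img in regions.items()}
    hot = {name for name, f in futures.items() if f.result()}
```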
Step 5: classify the segmented-region images according to the hot-spot locations extracted in step 4, and mark the hot-zone regions and normal regions distinctly on the bone scanning image.
Step 5.1: classify the 26 segmented-region images according to the hot-spot locations extracted in step 4, obtaining a hot-zone region image set containing hot spots and a normal region image set containing none;
Step 5.2: merge the hot-zone region image set and the normal region image set back into the whole bone scanning image, and mark the hot-zone regions and normal regions distinctly on the whole bone scanning image.
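Steps 5.1 and 5.2 amount to partitioning the region set by detection result and relabeling the whole scan. A minimal sketch, with illustrative names and toy data:

```python
def classify_and_mark(region_results):
    """Split regions into hot-zone and normal sets (step 5.1) and build a
    per-region label map for marking the whole scan (step 5.2).
    `region_results` maps region name -> True if a hot spot was detected."""
    hot_set = {name for name, has_hot in region_results.items() if has_hot}
    normal_set = set(region_results) - hot_set
    # In the patent the two sets are merged back into the whole bone scan
    # image and drawn with distinct marks; here we just label each region.
    marks = {name: ("hot zone" if name in hot_set else "normal")
             for name in region_results}
    return hot_set, normal_set, marks

results = {"spine": True, "pelvis": False, "head": False, "left knee": True}
hot_set, normal_set, marks = classify_and_mark(results)
```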
Further, the physician then diagnoses from the bone scan images whether the marked hot-zone regions contain lesions, and need not diagnose the normal regions again.
Further, this embodiment assesses accuracy with ten-fold cross-validation: the data set is randomly divided into ten equal parts, nine of which serve as the training set and one as the test set; this is repeated ten times to obtain a relatively reliable and stable performance evaluation. True-negative rate and true-positive rate are used as the evaluation indices, yielding the experimental data shown in Table 1.
Table 1
After repeated experiments, Table 1 shows that the highest true-negative and true-positive rates among the 26 segmented-region images are 99.9% and 97.3%, respectively, and that the average true-negative and true-positive rates over all segmented-region images are 99.2% and 71.8%, respectively. The accuracy of this embodiment in the specific experiment is therefore high.
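The evaluation protocol described above — ten random equal folds, nine for training and one for testing, with true-negative and true-positive rates as indices — can be sketched as follows. The data and the metric values below are synthetic, not those of Table 1.

```python
import random

def ten_fold_splits(items, seed=0):
    """Randomly split `items` into 10 equal folds; yield (train, test) pairs,
    each fold serving once as the test set (the protocol described above)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    folds = [items[i::10] for i in range(10)]
    for k in range(10):
        test = folds[k]
        train = [x for j, fold in enumerate(folds) if j != k for x in fold]
        yield train, test

def tn_tp_rates(y_true, y_pred):
    """True-negative and true-positive rates, the two indices used in Table 1."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tn / y_true.count(0), tp / y_true.count(1)

splits = list(ten_fold_splits(range(100)))
tnr, tpr = tn_tp_rates([0, 0, 0, 1, 1], [0, 0, 1, 1, 0])
```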
If this embodiment is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when executed by a processor it can implement the steps of each of the above method embodiments. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, certain intermediate forms, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a mobile hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
This embodiment uses deep learning and image processing techniques: after the acquired bone scanning image is preprocessed and segmented, a parallel deep neural network model is established and trained with the segmented-region images; feature extraction is then performed on the segmented-region images with the trained parallel deep neural network model, detecting the prominent locations where hot spots are present; finally, the segmented-region images are classified, and the hot-zone regions and normal regions are marked distinctly on the bone scanning image. The present invention detects images with a parallel deep neural network model in which multiple networks run simultaneously, which is much faster than processing the images manually and improves the speed of bone scanning image processing. Performing image processing and feature extraction uniformly on a terminal device reduces manual errors and increases the accuracy of bone scanning image processing. After bone scanning images have been processed with this embodiment and the hot-zone and normal regions have been marked distinctly on them, the physician only needs to diagnose the hot-zone regions of the bone scanning image and need not diagnose the normal regions again, which speeds up the physician's reading and reduces the physician's workload.
Embodiment two
This embodiment provides a bone scanning image processing terminal device based on a parallel deep neural network, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, such as a program implementing a bone scanning image processing method based on a parallel deep neural network. Illustratively, the computer program may be divided into one or more modules, which are stored in the memory and executed by the processor to carry out the present invention. The one or more modules may be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution of the computer program in the bone scanning image processing terminal device based on the parallel deep neural network. For example, the computer program may be divided into an image acquisition module, an image processing module, a model training module, a feature extraction module, and a classification module, whose specific functions are as follows:
The image acquisition module is used to import bone scanning images, a bone scanning image comprising an anterior image and a posterior image;
The image processing module is used to preprocess and segment the bone scanning image to obtain the segmented-region images of the bones;
The model training module is used to establish a parallel deep neural network model and train it with the segmented-region images to obtain a trained parallel deep neural network model;
The feature extraction module is used to perform feature extraction on the segmented-region images produced by the image processing module, using the trained parallel deep neural network model, to obtain the locations where hot spots are present;
The classification module is used to classify the segmented-region images according to the hot-spot locations and to mark the hot-zone regions and normal regions distinctly on the bone scanning image.
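The division of labor among these modules can be pictured as a small pipeline. The sketch below wires four of the five modules with toy stand-in callables (the model training module runs offline and is omitted); the class, names, and bodies are illustrative assumptions, not the patent's implementation.

```python
class BoneScanPipeline:
    """Minimal wiring of the inference-time modules of embodiment two.
    Each stage is a plug-in callable; the stand-ins below are toys."""

    def __init__(self, acquire, process, extract, classify):
        self.acquire = acquire      # image acquisition module
        self.process = process      # image processing module (preprocess + segment)
        self.extract = extract      # feature extraction module (trained model)
        self.classify = classify    # classification module

    def run(self, source):
        scan = self.acquire(source)             # anterior + posterior images
        regions = self.process(scan)            # segmented-region images
        hot_locations = self.extract(regions)   # regions with hot spots
        return self.classify(regions, hot_locations)

pipeline = BoneScanPipeline(
    acquire=lambda src: {"anterior": src, "posterior": src},
    process=lambda scan: {"spine": [200], "pelvis": [30]},
    extract=lambda regions: {n for n, px in regions.items() if max(px) > 150},
    classify=lambda regions, hot: {n: n in hot for n in regions},
)
labels = pipeline.run([0])
```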
Further, importing bone scanning images in the image acquisition module comprises: obtaining bone scanning images in DICOM format, namely anterior and posterior images, after detecting the whole-body bones with a radioactive imaging instrument; the bone scanning images are then imported into the image acquisition module in the form of image arrays.
Further, in the image processing module, preprocessing the bone scanning image comprises: applying, in order, to the anterior and posterior images respectively, contrast adjustment, automatic threshold segmentation, a closing operation, median filtering, a first opening operation, a mask operation, Gaussian blur, equalization, picture cropping, simple threshold segmentation, and a second opening operation;
Segmenting the bone scanning image comprises: performing key-point localization and image segmentation on the preprocessed anterior and posterior images, respectively, based on an anatomical partitioning algorithm, obtaining several segmented-region images after segmentation.
Further, establishing a parallel deep neural network model and training it with the segmented-region images in the model training module comprises:
establishing, according to the number of segmented-region images produced by the image processing module, a parallel deep neural network model comprising a corresponding number of individual deep neural networks; annotating the segmented-region images, marking the locations where hot spots are present; and inputting the annotated segmented-region images into the parallel deep neural network model simultaneously for training.
Further, classifying the segmented-region images according to the hot-spot locations and marking the hot-zone regions and normal regions distinctly on the bone scanning image in the classification module comprises:
classifying all segmented-region images according to the hot-spot locations, obtaining a hot-zone region image set containing hot spots and a normal region image set containing none; then merging the hot-zone region image set and the normal region image set back into the whole bone scanning image, and marking the hot-zone regions and normal regions distinctly on the whole bone scanning image.
The bone scanning image processing terminal device based on a parallel deep neural network of this embodiment may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The bone scanning image processing terminal device based on a parallel deep neural network may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that this embodiment is only an example and does not constitute a limitation on the bone scanning image processing terminal device based on a parallel deep neural network: it may include more or fewer components than illustrated, combine certain components, or use different components; for example, the bone scanning image processing terminal device based on a parallel deep neural network may also include input-output devices, network access devices, buses, and the like.
The processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the bone scanning image processing terminal device based on a parallel deep neural network, connecting the various parts of the entire device through various interfaces and lines.
The memory may be used to store the computer program and/or modules. The processor realizes the various functions of the bone scanning image processing terminal device based on a parallel deep neural network by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound-playing function or an image-playing function), and the data storage area may store data created according to the use of the terminal device (such as audio data or a phone directory). In addition, the memory may include a high-speed random access memory, and may also include a non-volatile memory such as a hard disk, a memory, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a Secure Digital (Secure Digital, SD) card, a flash card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
It should be noted that, since the drawings of the specification may not be colored or altered, places in the present invention that are distinguished by contrast are difficult to show clearly; color images can be provided if necessary.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modifications, equivalent replacements, improvements, and the like made by those skilled in the art within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. A bone scanning image processing method based on a parallel deep neural network, comprising the following steps:
Step 1: importing a bone scanning image, the bone scanning image comprising an anterior image and a posterior image;
Step 2: preprocessing and segmenting the bone scanning image to obtain segmented-region images of the bones;
Step 3: establishing a parallel deep neural network model and training it with the segmented-region images to obtain a trained parallel deep neural network model;
Step 4: performing feature extraction on the segmented-region images from step 2 with the trained parallel deep neural network model to obtain the locations where hot spots are present;
Step 5: classifying the segmented-region images according to the hot-spot locations extracted in step 4, and marking the hot-zone regions and normal regions distinctly on the bone scanning image.
2. The bone scanning image processing method based on a parallel deep neural network according to claim 1, wherein step 1 comprises:
Step 1.1: obtaining a bone scanning image in DICOM format, namely an anterior image and a posterior image, after detecting the whole-body bones with a radioactive imaging instrument;
Step 1.2: importing the bone scanning image in the form of an image array.
3. The bone scanning image processing method based on a parallel deep neural network according to claim 1, wherein step 2 comprises:
Step 2.1: applying, in order, to the anterior and posterior images respectively, contrast adjustment, automatic threshold segmentation, a closing operation, median filtering, a first opening operation, a mask operation, Gaussian blur, equalization, picture cropping, simple threshold segmentation, and a second opening operation, improving the quality of the images;
Step 2.2: performing key-point localization and image segmentation on the preprocessed anterior and posterior images, respectively, based on an anatomical partitioning algorithm, obtaining several segmented-region images after segmentation.
4. The bone scanning image processing method based on a parallel deep neural network according to claim 1, wherein step 3 comprises:
Step 3.1: establishing, according to the number of segmented-region images from step 2, a parallel deep neural network model comprising a corresponding number of individual deep neural networks;
Step 3.2: a physician annotating the several segmented-region images, marking the locations where hot spots are present;
Step 3.3: inputting the several annotated segmented-region images into the parallel deep neural network model simultaneously for training, obtaining a trained parallel deep neural network model.
5. The bone scanning image processing method based on a parallel deep neural network according to claim 1, wherein step 5 comprises:
Step 5.1: classifying all segmented-region images according to the hot-spot locations extracted in step 4, obtaining a hot-zone region image set containing hot spots and a normal region image set containing none;
Step 5.2: merging the hot-zone region image set and the normal region image set back into the whole bone scanning image, and marking the hot-zone regions and normal regions distinctly on the whole bone scanning image.
6. A bone scanning image processing system based on a parallel deep neural network, characterized by comprising an image acquisition module, an image processing module, a model training module, a feature extraction module, and a classification module;
the image acquisition module is used to import bone scanning images, a bone scanning image comprising an anterior image and a posterior image;
the image processing module is used to preprocess and segment the bone scanning image to obtain segmented-region images of the bones;
the model training module is used to establish a parallel deep neural network model and train it with the segmented-region images to obtain a trained parallel deep neural network model;
the feature extraction module is used to perform feature extraction on the segmented-region images produced by the image processing module, using the trained parallel deep neural network model, to obtain the locations where hot spots are present;
the classification module is used to classify the segmented-region images according to the hot-spot locations and to mark the hot-zone regions and normal regions distinctly on the bone scanning image.
7. The bone scanning image processing system based on a parallel deep neural network according to claim 6, wherein importing bone scanning images in the image acquisition module comprises:
obtaining bone scanning images in DICOM format, namely anterior and posterior images, after detecting the whole-body bones with a radioactive imaging instrument; the bone scanning images are then imported into the image acquisition module in the form of image arrays.
8. The bone scanning image processing system based on a parallel deep neural network according to claim 6, wherein, in the image processing module, preprocessing the bone scanning image comprises: applying, in order, to the anterior and posterior images respectively, contrast adjustment, automatic threshold segmentation, a closing operation, median filtering, a first opening operation, a mask operation, Gaussian blur, equalization, picture cropping, simple threshold segmentation, and a second opening operation;
segmenting the bone scanning image comprises: performing key-point localization and image segmentation on the preprocessed anterior and posterior images, respectively, based on an anatomical partitioning algorithm, obtaining several segmented-region images after segmentation.
9. The bone scanning image processing system based on a parallel deep neural network according to claim 6, wherein establishing a parallel deep neural network model and training it with the segmented-region images in the model training module comprises:
establishing, according to the number of segmented-region images produced by the image processing module, a parallel deep neural network model comprising a corresponding number of individual deep neural networks; annotating the segmented-region images, marking the locations where hot spots are present; and inputting the annotated segmented-region images into the parallel deep neural network model simultaneously for training.
10. The bone scanning image processing system based on a parallel deep neural network according to claim 6, wherein classifying the segmented-region images according to the hot-spot locations and marking the hot-zone regions and normal regions distinctly on the bone scanning image in the classification module comprises:
classifying all segmented-region images according to the hot-spot locations, obtaining a hot-zone region image set containing hot spots and a normal region image set containing none; then merging the hot-zone region image set and the normal region image set back into the whole bone scanning image, and marking the hot-zone regions and normal regions distinctly on the whole bone scanning image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910720428.6A CN110443792B (en) | 2019-08-06 | 2019-08-06 | Bone scanning image processing method and system based on parallel deep neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110443792A true CN110443792A (en) | 2019-11-12 |
CN110443792B CN110443792B (en) | 2023-08-29 |
Family
ID=68433413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910720428.6A Active CN110443792B (en) | 2019-08-06 | 2019-08-06 | Bone scanning image processing method and system based on parallel deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110443792B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462060A (en) * | 2020-03-24 | 2020-07-28 | 湖南大学 | Method and device for detecting standard section image in fetal ultrasonic image |
CN111539963A (en) * | 2020-04-01 | 2020-08-14 | 上海交通大学 | Bone scan image hot spot segmentation method, system, medium and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102096804A (en) * | 2010-12-08 | 2011-06-15 | 上海交通大学 | Method for recognizing image of carcinoma bone metastasis in bone scan |
US20130094704A1 (en) * | 2007-12-28 | 2013-04-18 | Exini Diagnostics Ab | System for detecting bone cancer metastases |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130094704A1 (en) * | 2007-12-28 | 2013-04-18 | Exini Diagnostics Ab | System for detecting bone cancer metastases |
CN102096804A (en) * | 2010-12-08 | 2011-06-15 | 上海交通大学 | Method for recognizing image of carcinoma bone metastasis in bone scan |
Non-Patent Citations (1)
Title |
---|
Xu Lei et al., "Research on whole-body skeletal SPECT image segmentation algorithms based on Gaussian mixture models and kernel density estimation", China Medical Devices * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462060A (en) * | 2020-03-24 | 2020-07-28 | 湖南大学 | Method and device for detecting standard section image in fetal ultrasonic image |
CN111539963A (en) * | 2020-04-01 | 2020-08-14 | 上海交通大学 | Bone scan image hot spot segmentation method, system, medium and device |
CN111539963B (en) * | 2020-04-01 | 2022-07-15 | 上海交通大学 | Bone scanning image hot spot segmentation method, system, medium and device |
Also Published As
Publication number | Publication date |
---|---|
CN110443792B (en) | 2023-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10929708B2 (en) | Deep learning network for salient region identification in images | |
KR101901397B1 (en) | SMI automatic analysis method of hand-wrist radiation images using deep learning | |
CN110136809A (en) | A kind of medical image processing method, device, electromedical equipment and storage medium | |
CN105917353B (en) | Feature extraction and matching for biological identification and template renewal | |
CN106934376B (en) | A kind of image-recognizing method, device and mobile terminal | |
CN108665456A (en) | The method and system that breast ultrasound focal area based on artificial intelligence marks in real time | |
CN109003672A (en) | A kind of early stage of lung cancer detection classification integration apparatus and system based on deep learning | |
CN110472616A (en) | Image-recognizing method, device, computer equipment and storage medium | |
CN108197557A (en) | Testimony of a witness consistency check method, terminal device and computer readable storage medium | |
CN106780475A (en) | A kind of image processing method and device based on histopathologic slide's image organizational region | |
CN107229952A (en) | The recognition methods of image and device | |
CN109087296A (en) | A method of extracting human region in CT image | |
CN110443792A (en) | A kind of bone scanning image processing method and system based on parallel deep neural network | |
CN109919912A (en) | A kind of quality evaluating method and device of medical image | |
CN114332938A (en) | Pet nose print recognition management method and device, intelligent equipment and storage medium | |
CN111353978B (en) | Method and device for identifying heart anatomy structure | |
CN112579808A (en) | Data annotation processing method, device and system | |
JP2005084980A (en) | Data generation unit for card with face image, method and program | |
CN114140465B (en) | Self-adaptive learning method and system based on cervical cell slice image | |
US20220319208A1 (en) | Method and apparatus for obtaining feature of duct tissue based on computer vision, and intelligent microscope | |
CN110110622A (en) | A kind of medical Method for text detection, system and storage medium based on image procossing | |
CN110473176A (en) | Image processing method and device, method for processing fundus images, electronic equipment | |
CN113537408A (en) | Ultrasonic image processing method, device and equipment and storage medium | |
CN110910409B (en) | Gray image processing method, device and computer readable storage medium | |
CN206363347U (en) | Based on Corner Detection and the medicine identifying system that matches |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||