CN110084796A - Analysis method for complex-texture CT images - Google Patents

Analysis method for complex-texture CT images

Info

Publication number
CN110084796A
CN110084796A (application CN201910334644.7A)
Authority
CN
China
Prior art keywords: image, cnn, layer, rate, net
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910334644.7A
Other languages
Chinese (zh)
Other versions
CN110084796B (en)
Inventor
李晓峰
叶晨
胡延军
李曾
姚标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Li Xiaofeng
Xuzhou Cancer Hospital
Original Assignee
Xuzhou Yun Lian Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuzhou Yun Lian Medical Technology Co Ltd
Priority to CN201910334644.7A
Publication of CN110084796A
Application granted
Publication of CN110084796B
Active legal status
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/0002 — Inspection of images, e.g. flaw detection
    • G06T7/0012 — Biomedical image inspection
    • G06T7/10 — Segmentation; Edge detection
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10072 — Tomographic images
    • G06T2207/10081 — Computed x-ray tomography [CT]
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/20221 — Image fusion; Image merging


Abstract

The invention discloses an analysis method for complex-texture CT images, comprising: segmentation of the original CT image; analysis of the segmented image; analysis of the original CT image; and image fusion. The invention proposes DenseU-Net for accurately segmenting contrast-enhanced thyroid CT images. Further, the invention proposes CNN-F, a new method for detecting benign and malignant thyroid nodules, which fuses features at multiple levels by means of transfer learning and by merging two different CNN structures. The method requires no complex preprocessing of the contrast-enhanced thyroid CT images and makes an accurate judgment of the CT image without any user intervention, with an accuracy of 95.73%.

Description

Analysis method for complex-texture CT images
Technical field
The invention belongs to the field of image analysis, and in particular relates to an analysis method for complex-texture CT images.
Background technique
In recent years a large number of researchers have worked on computer-aided diagnosis. Although these studies have achieved good results in ultrasound image processing, the methods are not well suited to CT images, which are highly complex and in which the thyroid region is especially small. Prior-art recognition methods based on statistical texture features of CT images reach only 88% accuracy and require complex manual preprocessing and feature-extraction steps. These factors make further improvement difficult: effectively extracting features, and selecting suitable ones from among many candidates, are the two major difficulties.
Deep convolutional neural networks (Convolutional Neural Networks, CNNs) have been explored in the medical domain, but a typical CNN consists of millions of nodes and weights; in general, the more parameters a CNN has, the better its performance, which means that only large datasets can support its training. Because large, high-quality medical datasets are hard to obtain, CNN applications in medicine face many restrictions. How to reduce, or even eliminate, manual participation in the CT image analysis process, and how to effectively extract and select suitable features so as to improve the accuracy of automatic, AI-based CT image analysis, are technical problems to be solved in this field.
Summary of the invention
The purpose of this section is to summarize some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract, and in the title of the application to avoid obscuring their purpose; such simplifications or omissions cannot be used to limit the scope of the invention.
In view of the above technical deficiencies, the present invention is proposed.
Therefore, as one aspect, the invention overcomes the deficiencies of the prior art and solves the technical problem of analyzing complex-texture CT images by providing an analysis method for complex-texture CT images.
To solve the above technical problem, the invention provides the following technical scheme: an analysis method for complex-texture CT images, comprising: segmentation of the original CT image; analysis of the segmented image; analysis of the original CT image; and image fusion.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: the segmentation of the original CT image uses a DenseU-Net neural network architecture, whose encoder is given by
x_l = g_l([x_0, x_1, ..., x_{l-1}])
where g_l is BN-ReLU-Conv((1 × 1) × (4·growth_rate))-BN-ReLU-Conv((3 × 3) × (growth_rate)), growth_rate is set to 32, 32, 64, 64, 128 over five dense blocks, and [x_0, x_1, ..., x_{l-1}] denotes the concatenation of the outputs of the preceding l − 1 layers.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: the DenseU-Net decoder is given by
x_{l+n} = f_{l+n}([f_{l-n}(x_{l-n}) + x_{l-n}, x_{l+n}])
where f_{l-n} and f_{l+n} denote convolution operations and x_{l-n} is the output of layer l − n. This guarantees that the necessary information of the original feature map is preserved, while f_{l-n} filters out useless interference features.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: the method further includes a CNN-F network structure, composed of a CNN-1 network and a CNN-2 network connected in parallel. CNN-1 captures the overall high-level features of the CT image and the surrounding tissue; the data processed by CNN-1 are the untreated original CT images. CNN-2 captures the subtle low-level texture features inside the thyroid region; the data processed by CNN-2 are the image data obtained from the original CT image after DenseU-Net segmentation. CNN-1 and CNN-2 merge their features through a concatenation layer, after which a fully connected layer outputs the result.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: CNN-1 comprises five dense blocks with 63 convolutional layers in total; the activation function of the convolutional layers is the rectified linear unit (ReLU). Each dense block is followed by a max-pooling layer with a 2 × 2 window and stride 2, which halves the feature-map size. A batch-normalization layer is applied after each pooling layer; it re-standardizes the activations of the preceding layer within each batch, so that the mean of its output data is close to 0 and the standard deviation is close to 1. After the dense blocks there is a flatten layer, followed by a fully connected layer with 2 nodes, and the predicted label is generated with a softmax function.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: the growth_rate of the dense blocks is set to 32.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: CNN-2 comprises five dense blocks with 31 convolutional layers in total; the third to fifth dense blocks in CNN-2 are two-layer structures. The activation function of the convolutional layers is ReLU. Each dense block is followed by a max-pooling layer with a 2 × 2 window and stride 2, which halves the feature-map size; a batch-normalization layer after each pooling layer re-standardizes the activations of the preceding layer within each batch, so that the output mean is close to 0 and the standard deviation is close to 1. After the dense blocks, an average-pooling layer reduces the feature map of each channel to size 1, followed by a flatten layer, then a fully connected layer with 2 nodes, and the predicted label is generated with a softmax function.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: the growth_rate of these dense blocks is set to 32.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: DenseU-Net is trained with the Dice loss function,
L_Dice = 1 − (2 Σ_n p_n t_n + ε) / (Σ_n p_n + Σ_n t_n + ε)
where p_n and t_n are the values of the n-th pixel of the predicted mask and the true mask respectively, both between 0 and 1, and ε is a smoothing coefficient.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: the optimizer of this training method is Adam, with a learning rate of 0.0001, a first-moment exponential decay rate of 0.9, and a second-moment exponential decay rate of 0.999; during training, data are fed to the network in batches of two images.
As a preferred embodiment of the analysis method for complex-texture CT images of the invention: the objective function used when training CNN-1 is the multi-class cross-entropy function, optimized with Adam (learning rate 0.001, first-moment exponential decay rate 0.9, second-moment exponential decay rate 0.999); when CNN-1 is later fine-tuned, stochastic gradient descent (SGD) with momentum is used, with a learning rate of 0.00001 and a momentum of 0.9. The objective function used when training CNN-2 is also multi-class cross entropy, optimized with Adam (learning rate 0.001, first-moment decay rate 0.9, second-moment decay rate 0.999). CNN-F is trained with multi-class cross entropy and optimized with SGD at a learning rate of 0.00001.
Beneficial effects of the invention: the invention proposes DenseU-Net for accurately segmenting contrast-enhanced thyroid CT images and achieves such accurate segmentation. Further, the invention proposes CNN-F, a new method for detecting benign and malignant thyroid nodules, which fuses features at multiple levels by means of transfer learning and by merging two different CNN structures. The invention requires no complex preprocessing of the contrast-enhanced thyroid CT image and makes an accurate judgment of the CT image without any user intervention, with an accuracy of 95.73%.
Detailed description of the invention
To explain the technical solutions of the embodiments of the invention more clearly, the drawings required in the following description are briefly introduced. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative labor. In the drawings:
Fig. 1 is a flow diagram of the CT image analysis of this embodiment.
Fig. 2 is a structural schematic of a dense block.
Fig. 3 is a simplified schematic of the CNN-F network structure of the invention.
Fig. 4 is a detailed schematic of the CNN-F network structure of the invention.
Fig. 5 compares the Dice coefficient of the traditional U-Net and of DenseU-Net on the training set as a function of the number of training epochs.
Fig. 6 shows segmentation maps generated by DenseU-Net.
Fig. 7 shows the masking comparison experiment.
Fig. 8 shows the ROC curves of the different CNNs of the invention for detecting thyroid nodules.
Specific embodiment
To make the above objects, features, and advantages of the invention clearer and easier to understand, specific embodiments of the invention are described in detail below with reference to concrete examples.
Many specific details are set forth in the following description to facilitate a full understanding of the invention, but the invention can also be implemented in ways other than those described here; those skilled in the art can make similar generalizations without departing from its spirit, so the invention is not limited to the specific embodiments disclosed below.
Further, "one embodiment" or "an embodiment" as referred to herein means a particular feature, structure, or characteristic that may be included in at least one implementation of the invention. "In one embodiment" appearing in different places in this specification does not always refer to the same embodiment, nor to separate or alternative embodiments that are mutually exclusive of other embodiments.
Embodiment 1:
The contrast-enhanced thyroid CT images used by the invention were provided by a hospital: 2,012 CT images in total from 398 patients. Of these, 73 patients were diagnosed with malignant thyroid nodules, accounting for 592 CT images; the thyroid nodules of the other 325 patients were benign, accounting for 1,420 images. The final diagnosis for these images was based on fine-needle aspiration biopsy (FNA).
Fig. 1 shows the CT image analysis flow of this embodiment.
The invention develops the DenseU-Net network architecture, which consists of an encoder and a decoder. The encoder is composed of multiple convolutional layers in which the feature-map size shrinks progressively while the number of channels grows. In deep neural networks, depth is a very important parameter: in general, learning performance improves as the network deepens, but the vanishing-gradient problem then follows and leads to worse training results. To solve this technical problem, the invention develops the DenseU-Net encoder equation:
x_l = g_l([x_0, x_1, ..., x_{l-1}])
where g_l is designed as the composite operation BN-ReLU-Conv((1 × 1) × (4·growth_rate))-BN-ReLU-Conv((3 × 3) × (growth_rate)) and [x_0, x_1, ..., x_{l-1}] denotes the concatenation of the outputs of the preceding l − 1 layers. The network structure uses dense blocks (see Fig. 2 for a schematic) with growth rates of 32, 32, 64, 64, 128 over five blocks; this design prevents the encoder from becoming too deep, which would degrade overall network performance. The inventors found that if growth_rate were fixed at 32, the fifth dense block of the encoder would need 16 layers, far more than the number of layers in the decoder; such a severe imbalance between encoder and decoder depths hurts overall network performance.
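As an illustration of the dense connectivity above, the channel bookkeeping of a dense block can be sketched in plain Python (a minimal sketch under the assumption that each g_l always emits growth_rate feature maps; the function name is ours, not from the patent):

```python
def dense_block_channels(input_channels, num_layers, growth_rate):
    """Track channel widths through a dense block: layer l consumes the
    concatenation [x0, x1, ..., x_{l-1}] and emits growth_rate channels."""
    widths = [input_channels]          # channel widths of x0, x1, ...
    for _ in range(num_layers):
        _concat_width = sum(widths)    # input width to g_l: all prior outputs
        widths.append(growth_rate)     # g_l always outputs growth_rate maps
    return widths, sum(widths)         # final concatenated width after the block

# a 4-layer block with growth_rate 32 on a 64-channel input
widths, out_channels = dense_block_channels(64, 4, 32)  # out_channels = 192
```

This makes concrete why dense connections keep the network narrow: each layer adds only growth_rate channels regardless of how wide its concatenated input already is.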
Thanks to this design the network is narrower and has fewer parameters, while the dense connections make the propagation of features and gradients more effective, so the network is also easier to train. At the same time, because the parameter count is much lower than that of a generic CNN, the design suppresses overfitting to some extent and acts as a form of regularization.
In the existing U-Net, contextual information is passed directly to the decoder. Although this connection retains the most original context to help the decoder rebuild the contour and details of the segmentation target, the unprocessed feature maps also contain a large amount of interference, such as information outside the target region, which reduces network performance. To solve this problem, the DenseU-Net decoder is developed here and is expressed by:
x_{l+n} = f_{l+n}([f_{l-n}(x_{l-n}) + x_{l-n}, x_{l+n}])
where f_{l-n} denotes a convolution operation and x_{l-n} is the output of layer l − n. This guarantees that the necessary information of the original feature map is preserved, while f_{l-n} filters out useless interference features; f_{l+n} likewise denotes a convolution operation.
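The decoder step can be illustrated numerically as follows (a toy NumPy sketch of the formula above; the ReLU and identity stand-ins are our placeholders for the actual learned convolutions f_{l-n} and f_{l+n}):

```python
import numpy as np

def refine_skip(encoder_feat, decoder_feat, f_skip, f_out):
    """DenseU-Net decoder step: pass the encoder feature through f_skip,
    add the original back (keeping its necessary information), concatenate
    with the decoder feature along channels, then mix with f_out."""
    refined = f_skip(encoder_feat) + encoder_feat
    fused = np.concatenate([refined, decoder_feat], axis=-1)
    return f_out(fused)

enc = np.array([[-1.0, 2.0]])   # toy encoder feature x_{l-n}
dec = np.array([[0.5, 0.5]])    # toy decoder feature
out = refine_skip(enc, dec, lambda x: np.maximum(x, 0.0), lambda x: x)
```

The residual addition is the key design choice: even if f_{l-n} zeroes a useful channel, the raw encoder value is still carried forward.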
The invention develops the CNN-F network structure:
The invention proposes the CNN-F network structure (Fusion of CNN-1 and CNN-2); a schematic is shown in Fig. 3. CNN-F extracts features from the input images with dense-block structures and finally classifies them through a fully connected layer and a softmax function; the detailed structure of CNN-1 is shown in Fig. 4. CNN-F merges a shallow and a deep network: it is formed by CNN-1 and CNN-2 in parallel, the two kinds of image data are fed simultaneously into CNN-1 and CNN-2, the features extracted by the two networks are merged by a Concatenate layer, and the result is finally output by a fully connected layer with softmax activation. CNN-1 captures the high-level features of the whole thyroid nodule and the surrounding tissue in the CT image, while CNN-2 captures the subtle low-level texture features inside the thyroid, so that multiple feature levels are learned from the contrast-enhanced thyroid CT image.
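The fusion head just described can be sketched as follows (concatenation of the two branches' feature vectors followed by a 2-node fully connected layer with softmax; the zero weights are placeholders, not trained values):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cnn_f_head(feat_cnn1, feat_cnn2, weights, bias):
    """Concatenate the CNN-1 and CNN-2 feature vectors and classify with
    a fully connected layer (2 output nodes) plus softmax."""
    fused = np.concatenate([feat_cnn1, feat_cnn2], axis=-1)
    return softmax(fused @ weights + bias)

f1 = np.array([[1.0, 0.0]])            # toy CNN-1 features (high-level)
f2 = np.array([[0.0, 1.0]])            # toy CNN-2 features (low-level texture)
W = np.zeros((4, 2)); b = np.zeros(2)  # placeholder (untrained) parameters
probs = cnn_f_head(f1, f2, W, b)       # uniform because W and b are zero
```

Because the concatenation happens before the classifier, the fully connected layer can weight high-level and low-level evidence jointly rather than averaging two separate decisions.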
Specifically, CNN-1 has five dense blocks with growth_rate set to 32 and 63 convolutional layers in total; the activation function of all convolutional layers is the rectified linear unit (ReLU). CNN-1 also uses other types of layers (including a flatten layer and a fully connected layer), as shown in Fig. 4. Each dense block is followed by a max-pooling layer with a 2 × 2 window and stride 2, which halves the feature-map size. In addition, a batch-normalization layer is applied after every pooling layer; it re-standardizes the activations of the previous layer over each batch, so that the output mean is close to 0 and the standard deviation is close to 1. Its effects are to accelerate convergence, control overfitting, reduce the network's sensitivity to weight initialization, and allow larger learning rates. After the convolutional layers there is an average-pooling layer that reduces the feature map of each channel to size 1, then a flatten layer that "presses" the input feature maps into one dimension for the transition from convolutional to fully connected layers, and finally a fully connected layer with 2 nodes whose predicted label is generated by a softmax function.
CNN-2 differs from CNN-1 in total depth: it has 31 convolutional layers in total, its 3rd to 5th dense blocks are two-layer structures, and its growth_rate is 32.
Training method:
When training DenseU-Net, real-time data augmentation is used to increase the diversity of the data: random horizontal flips, random shear/stretch, and random rotation. DenseU-Net is first initialized with random parameters; then 256 × 256 contrast-enhanced thyroid CT images are used as the input, and masks of the same size serve as labels for supervised training.
The loss function used to train DenseU-Net is the Dice loss. The Dice coefficient is a set-similarity measure that describes how similar two contour regions are and is equivalent to the F1 score:
Dice(A, B) = 2|A ∩ B| / (|A| + |B|)
where A and B denote the point sets enclosed by the two contours. The Dice loss can therefore be expressed as:
L_Dice = 1 − (2 Σ_n p_n t_n + ε) / (Σ_n p_n + Σ_n t_n + ε)
where p_n and t_n are the values of the n-th pixel of the predicted and true masks, both between 0 and 1. ε is a smoothing coefficient that reduces the loss value and the network's overfitting.
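The Dice loss can be written directly in NumPy (a minimal sketch; ε = 1 is an assumed value here, since the patent does not state the smoothing coefficient):

```python
import numpy as np

def dice_loss(pred, true, eps=1.0):
    """Dice loss: 1 - (2*sum(p_n*t_n) + eps) / (sum(p_n) + sum(t_n) + eps)."""
    p = pred.ravel().astype(float)
    t = true.ravel().astype(float)
    return 1.0 - (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)

perfect = dice_loss(np.ones(10), np.ones(10))    # identical masks -> loss 0
disjoint = dice_loss(np.ones(10), np.zeros(10))  # no overlap -> loss near 1
```

Note that, unlike per-pixel cross entropy, the Dice loss is driven by the overlap of the whole mask, which makes it robust to the large foreground/background imbalance of a small thyroid region.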
When training CNN-F, the imbalance between positive and negative samples in the dataset would slow the convergence of the network and degrade its performance during training. The scarcer positive samples are therefore oversampled so that their number is equal or close to that of the negative samples. Meanwhile, to reduce the overfitting brought about by the small data volume, CNN-1 is pre-trained with a set of 25,000 natural images from the ImageNet dataset; the weights of CNN-1 are initialized by 30 iterations of this pre-training. CNN-1 is then trained with the original contrast-enhanced CT images. As before, the data undergo augmentation before entering the network.
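The oversampling of the minority (malignant) class can be sketched as follows (random redrawing with replacement until the class sizes match; the function name and seed are our illustration, not the patent's exact procedure):

```python
import random

def oversample(minority, majority, seed=0):
    """Re-draw minority samples with replacement until the minority class
    matches the majority class in size."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return minority + extra

malignant = list(range(592))   # 592 malignant CT images in the dataset
benign = list(range(1420))     # 1420 benign CT images
balanced_malignant = oversample(malignant, benign)  # now 1420 samples
```

A balanced alternative would be class-weighted loss; oversampling is the route the patent describes.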
For CNN-2, the weights are initialized with random parameters, and CNN-2 is then trained with the mask-processed image data generated by DenseU-Net, which contain only the thyroid ROI. The loss function used by CNN-1 and CNN-2 is the multi-class cross entropy:
L = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} t_{ij} log(p_{ij})
where n is the number of samples and m is the number of classes.
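The multi-class cross entropy used for CNN-1 and CNN-2 can be sketched in NumPy (the clipping constant is our numerical safeguard against log(0), not part of the patent's formula):

```python
import numpy as np

def categorical_crossentropy(t, p, eps=1e-12):
    """-(1/n) * sum_i sum_j t_ij * log(p_ij) over n samples and m classes."""
    p = np.clip(p, eps, 1.0)                       # avoid log(0)
    return -np.mean(np.sum(t * np.log(p), axis=1))

targets = np.array([[1.0, 0.0], [0.0, 1.0]])       # one-hot labels, m = 2
probs = np.array([[0.9, 0.1], [0.2, 0.8]])         # predicted softmax outputs
loss = categorical_crossentropy(targets, probs)
```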
After CNN-1 and CNN-2 are fully trained, the two are merged and CNN-F is trained.
Embodiment 2:
Verification of the method of Embodiment 1:
Five rounds of five-fold cross-validation were performed on both the thyroid segmentation dataset and the benign/malignant dataset. Four folds are used to train the network and one to assess model performance; on the benign/malignant dataset, the data of the same patient are never split between the training set and the validation set. DenseU-Net is evaluated with the Dice coefficient, whose value lies between 0 and 1, with values closer to 1 indicating more accurate segmentation. The accuracy of CNN-F's intelligent judgment is assessed by the accuracy rate and the area under the receiver-operating-characteristic (ROC) curve (AUC), whose value also lies between 0 and 1.
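The patient-level split described above can be sketched as follows (all images from one patient land in the same fold, avoiding leakage between training and validation; the function name and seed are ours):

```python
import random

def patient_level_folds(patient_ids, k=5, seed=0):
    """Split patient IDs (not individual images) into k folds so that no
    patient's images are divided between training and validation."""
    ids = sorted(set(patient_ids))
    rng = random.Random(seed)
    rng.shuffle(ids)
    return [ids[i::k] for i in range(k)]   # round-robin over shuffled IDs

folds = patient_level_folds(range(398), k=5)   # 398 patients, five folds
```

Splitting at the image level instead would let near-duplicate slices of one patient appear on both sides of the split and inflate the measured accuracy.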
In the training process, the objective function used by DenseU-Net is the Dice loss and the optimizer is Adam, with a learning rate of 0.0001, a first-moment exponential decay rate of 0.9, and a second-moment exponential decay rate of 0.999. In addition, data are fed to the network during training in batches of 2 images.
The objective function used by CNN-1 is the multi-class cross entropy (categorical cross-entropy). During pre-training the optimizer is Adam with a learning rate of 0.001, a first-moment exponential decay rate of 0.9, and a second-moment exponential decay rate of 0.999. When CNN-1 is later fine-tuned, stochastic gradient descent with momentum (SGD) is used, with a learning rate of 0.00001 and a momentum of 0.9; the goal is to adjust the already well-trained weights slowly without destroying them. CNN-2 likewise uses multi-class cross entropy, with Adam at a learning rate of 0.001, a first-moment decay rate of 0.9, and a second-moment decay rate of 0.999. Finally, CNN-F also uses multi-class cross entropy and is optimized with stochastic gradient descent (SGD) at a learning rate of 0.00001.
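One update step of the Adam optimizer with the DenseU-Net settings above (learning rate 0.0001, decay rates 0.9 and 0.999) can be sketched as follows; ε = 1e-8 is the conventional default, which the patent does not state:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction, step t >= 1
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
w, m, v = adam_step(w, np.array([0.5]), m, v, t=1)
# the first step moves w by roughly lr, regardless of gradient magnitude
```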
The experiments were run on the same computer, with an Intel Core i5-7400 (3.0 GHz) CPU, 8 GB of RAM, an 11 GB Nvidia GeForce GTX 1080 Ti GPU, and 64-bit Windows 10. The runtime environment was Python 3.5 and Keras with TensorFlow as the backend. Training DenseU-Net took about 5 hours; training CNN-1 and CNN-2 together took about 4.75 hours. Segmenting one thyroid CT image on the trained DenseU-Net takes 0.2 seconds, and detecting one image on CNN-F takes about 0.4 seconds.
The performance of the improved DenseU-Net network on CT image analysis is studied first; the detailed configuration of DenseU-Net is described in Table 1. Fig. 6 shows segmentation maps generated by DenseU-Net: the first row shows the original CT images, the second row the masks automatically generated by DenseU-Net, and the third row the true masks. As can be seen from Fig. 6, the improved network can segment the region of interest well, and can still do so accurately even when the edges are blurred.
Performance is assessed by comparing the Dice coefficients of DenseU-Net and U-Net; Fig. 5 plots the Dice coefficient of the traditional U-Net and of DenseU-Net on the training set against the number of training epochs. As can be seen from Fig. 5, the Dice coefficient of the DenseU-Net structure of the invention is significantly higher than that of the traditional U-Net, showing that the image-analysis accuracy of the invention is significantly higher than that of the existing method.
Table 1
To verify that what CNN-1 learns through training are the features of the thyroid region in the CT images rather than those of the remaining organs and tissues, a masking comparison experiment was constructed. As shown in Fig. 7, the training set consists of original CT images, validation set 1 consists of ROI images processed by DenseU-Net, and validation set 2 consists of CT images with the thyroid region removed.
To assess the judgment accuracy of CNN-F, the invention uses the VggNet network structure as a comparison; CNN-F is described in Table 2. For ease of assessment, these CNNs underwent five-fold cross-validation; Table 3 reports the accuracy rate and AUC of these architectures with 95% confidence intervals, and P < 0.05 is considered statistically significant. Fig. 8 shows the ROC curves of the different CNNs of the invention for detecting thyroid nodules.
Table 2
Table 3
As shown in Table 3, the judgment accuracy of the CNN-F structure of the invention reaches 95.73%, a significant advance in the field of CT image analysis.
Detecting whether thyroid nodules are benign or malignant plays a crucial role in optimal treatment quality and patient prognosis. However, because of CT artifacts, the varied complexity of the tissue surrounding the thyroid, blurred edges, and other factors, existing machine-learning algorithms struggle with thyroid-nodule detection in contrast-enhanced CT. The method of the invention solves this problem. Compared with the potential errors produced by inaccurate image preprocessing in conventional machine-learning algorithms, and the classification bias caused by unreasonable feature selection when features are extracted manually, the method of the invention learns weights and biases from the data and generates a data-driven, task-specific dense feature extractor that fully exploits the 2D structure of the images, thereby avoiding the above problems.
The DenseU-Net and CNN-F of the invention can automatically extract effective features from contrast-enhanced CT images without making any assumptions about the relevant visual features: well-trained convolutional filters extract features of different edges, shapes, and scales, which are merged and normalized to identify whether the thyroid nodule is benign or malignant, while the networks also learn from CT images containing artifacts. To assess the proposed DenseU-Net, five-fold cross-validation was performed experimentally on 500 contrast-enhanced CT images and the corresponding segmentation masks. Fig. 5 shows that DenseU-Net is significantly superior to the original U-Net in both training speed and final performance, with the Dice coefficient improved to 0.955; Fig. 6 shows the effect of the masks that DenseU-Net generates on CT images. The results show that the improved network of the invention can segment the thyroid region well, and can still do so accurately even when the thyroid edges are blurred.
In the method of the present invention, CNN-2 focuses on learning the internal texture features and lesion appearance; by fusing CNN-1 with CNN-2, the advantages of both can be combined to obtain a better detection result.
The benign/malignant thyroid nodule detection method of the present invention consists of two parts, as shown in Fig. 1. The first part automatically segments the thyroid region of interest in the thyroid CT image through DenseU-Net (the data processed by CNN-1 are untreated original CT images, from which it captures the sophisticated features of the whole CT image and the surrounding tissue). The second part is the fusion of two CNN networks with different structures: CNN-1 is trained on the original CT images, while CNN-2 is trained on CT images processed by the segmentation mask; finally the two networks are merged into CNN-F, which combines multiple feature levels to judge whether the thyroid nodule is benign or malignant.
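The two-branch design described above — CNN-1 on the raw CT image, CNN-2 on the mask-segmented image, merged into CNN-F — can be sketched at the feature level. A minimal numpy illustration: the two feature extractors are stand-in stubs (simple per-column statistics), not the actual trained networks, and the dimensions and weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn1_features(raw_image):
    """Stand-in for CNN-1: high-level summary of the whole image + surroundings."""
    return raw_image.mean(axis=0)          # (W,) per-column mean, purely illustrative

def cnn2_features(masked_image):
    """Stand-in for CNN-2: fine texture statistics inside the segmented region."""
    return masked_image.std(axis=0)        # (W,) per-column spread, purely illustrative

def cnn_f(raw_image, masked_image, w, b):
    """CNN-F: concatenate both feature vectors, then a 2-node softmax head."""
    fused = np.concatenate([cnn1_features(raw_image), cnn2_features(masked_image)])
    logits = fused @ w + b                 # fully connected layer with 2 output nodes
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # softmax: benign / malignant probabilities

raw = rng.random((8, 8))
masked = raw * (rng.random((8, 8)) > 0.5)  # segmentation mask applied to the raw image
w = rng.normal(size=(16, 2))
b = np.zeros(2)
probs = cnn_f(raw, masked, w, b)
```

The design point is that the concatenation happens before the final classifier, so the softmax head sees both feature levels at once rather than averaging two separate decisions.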
After performing automatic segmentation with the DenseU-Net network, the present invention uses CNN-F, which fuses two deep neural network structures and performs feature selection, to make the benign/malignant judgement. The present invention reduces the overall parameter count of the network, which reduces over-fitting to a certain extent and allows descending gradients to propagate effectively to the deep layers during training, so that convergence is faster and the DenseU-Net can ultimately accomplish the thyroid region segmentation task outstandingly. For the benign/malignant judgement of thyroid nodules, the present invention uses a CNN-based detection method that fuses two separately trained networks, CNN-1 and CNN-2: by letting CNN-1 train on the original CT images and CNN-2 learn on the segmented thyroid CT images, the fused CNN-F combines the advantages of both and can reach a better benign/malignant judgement from multiple feature levels. The results show that the present invention solves the problem of benign/malignant thyroid nodule detection very well, demonstrating its potential clinical application. The method of the present invention can provide doctors with an objective second opinion and reduce misdiagnoses caused by over-fatigue.
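Claim 10 below specifies the training recipe for CNN-1: Adam first, then fine-tuning with momentum SGD at a much smaller learning rate. The two update rules can be sketched on a single scalar parameter; a toy quadratic loss stands in for the multi-class cross-entropy objective, and the hyperparameter values are taken from claim 10:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (decay rates 0.9 / 0.999 as in claim 10)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)           # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)           # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def sgd_momentum_step(theta, grad, vel, lr=1e-5, momentum=0.9):
    """One momentum-SGD update for the fine-tuning phase (lr 0.00001, momentum 0.9)."""
    vel = momentum * vel - lr * grad
    return theta + vel, vel

# Toy loss L(theta) = theta^2, gradient 2*theta.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):                    # initial Adam training phase
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
vel = 0.0
for _ in range(100):                       # fine-tuning phase at a tiny learning rate
    theta, vel = sgd_momentum_step(theta, 2.0 * theta, vel)
```

The much smaller fine-tuning learning rate makes only small adjustments to weights that Adam has already brought close to a minimum, which matches the two-phase schedule the claim describes.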
The original thyroid CT image of the present invention refers to a thyroid CT scan that has not undergone any image processing. The high-level features of the present invention refer to the captured image of the thyroid tissue as a whole together with the surrounding tissue; the low-level features refer to the fine texture images inside the thyroid tissue.
It should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the present invention, all of which shall be covered by the scope of the claims of the present invention.

Claims (10)

1. An analysis method of a complex texture CT image, characterized by comprising:
segmentation processing of an original CT image;
analyzing the image that has undergone the segmentation processing;
analyzing the original CT image;
image fusion.
2. The analysis method of a complex texture CT image according to claim 1, characterized in that: the segmentation processing of the original CT image uses a DenseU-Net neural network architecture; the encoder equation of the DenseU-Net is:
x_l = g_l([x_0, x_1, ..., x_{l-1}])
wherein g_l is BN-ReLU-Conv((1×1)×(4*growth_rate))-BN-ReLU-Conv((3×3)×(growth_rate)), the growth_rate is set to 32, 32, 64, 64 and 128 for the five layers respectively, and [x_0, x_1, ..., x_{l-1}] denotes the concatenation of the outputs of the preceding l-1 layers;
the decoder formula of the DenseU-Net is:
x_{l+n} = f_{l+n}([f_{l-n}(x_{l-n}) + x_{l-n}, x_{l+n}])
wherein f_{l-n} and f_{l+n} denote convolution operations and x_{l-n} denotes the output of layer l-n.
3. The analysis method of a complex texture CT image according to claim 1 or 2, characterized in that: the analyzing of the image that has undergone the segmentation processing uses a CNN-2 network structure; the analyzing of the original CT image uses a CNN-1 network structure; the image fusion uses a CNN-F network structure;
the CNN-F network structure is formed by connecting the CNN-1 network structure and the CNN-2 network structure in parallel; the CNN-1 captures high-level features of the whole CT image and the surrounding tissue, and the data processed by the CNN-1 are the original CT images; the CNN-2 captures subtle low-level features inside the thyroid tissue, and the data processed by the CNN-2 network structure are the images obtained after the original CT images have been processed by the DenseU-Net; the CNN-1 and the CNN-2 merge their features through a concatenation layer, and the result is then output through a fully connected layer.
4. The analysis method of a complex texture CT image according to claim 3, characterized in that: the CNN-1 comprises five dense blocks with 63 convolutional layers in total; the activation function used by the convolutional layers is the rectified linear unit; after each dense block, a max-pooling layer with window size 2×2 and stride 2 is applied, halving the feature map size; a batch normalization layer is applied after the pooling layer, which re-standardizes the activations of the preceding layer within each batch, i.e., makes the mean of its output data close to 0 and its standard deviation close to 1; after the dense blocks there is a flattening layer, followed by a fully connected layer with 2 nodes, and the predicted label is generated using the softmax function.
5. The analysis method of a complex texture CT image according to claim 4, characterized in that: the growth_rate of the dense blocks is set to 32.
6. The analysis method of a complex texture CT image according to claim 3, characterized in that: the CNN-2 comprises five dense blocks with 31 convolutional layers in total, wherein the third to fifth dense blocks of the CNN-2 have a two-layer structure; the activation function used by the convolutional layers is the rectified linear unit; after each dense block, a max-pooling layer with window size 2×2 and stride 2 is applied, halving the feature map size; a batch normalization layer is applied after the pooling layer, which re-standardizes the activations of the preceding layer within each batch, i.e., makes the mean of its output data close to 0 and its standard deviation close to 1; after the dense blocks there is an average pooling layer that reduces the feature map of each channel to size 1, followed by a flattening layer and then a fully connected layer with 2 nodes, and the predicted label is generated using the softmax function.
7. The analysis method of a complex texture CT image according to claim 6, characterized in that: the growth_rate of the dense blocks is set to 32.
8. The analysis method of a complex texture CT image according to any one of claims 1, 2 and 4 to 7, characterized in that: the training method of the DenseU-Net includes using a Dice loss function, with the formula:
L_Dice = 1 - (2·Σ_n p_n·t_n + ∈) / (Σ_n p_n + Σ_n t_n + ∈)
wherein p_n and t_n are respectively the values of each pixel of the predicted mask and the true mask, both lying between 0 and 1, and ∈ is a smoothing coefficient.
9. The analysis method of a complex texture CT image according to claim 8, characterized in that: the optimization method of the training method is Adam, wherein the learning rate of Adam is set to 0.0001, the exponential decay rate of the first-moment estimate is set to 0.9 and the exponential decay rate of the second-moment estimate is 0.999; during training, data are fed into the network in batches of two images.
10. The analysis method of a complex texture CT image according to claim 3, characterized in that: the objective function used when training CNN-1 is the multi-class cross-entropy function and the optimization method is Adam, wherein the learning rate of Adam is set to 0.001, the exponential decay rate of the first-moment estimate is set to 0.9 and the exponential decay rate of the second-moment estimate is 0.999; when CNN-1 is subsequently fine-tuned, a stochastic gradient descent optimization algorithm with momentum is used, wherein the learning rate of SGD is set to 0.00001 and the momentum parameter is 0.9; the objective function used when training CNN-2 is the multi-class cross-entropy function and the optimization method is Adam, wherein the learning rate of Adam is set to 0.001, the exponential decay rate of the first-moment estimate is set to 0.9 and the exponential decay rate of the second-moment estimate is 0.999; the objective function used when training CNN-F is the multi-class cross-entropy function, the optimization method is the stochastic gradient descent optimization algorithm, and the learning rate is set to 0.00001.
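The dense connectivity in claim 2 — each layer receiving the channel-wise concatenation of all preceding outputs, x_l = g_l([x_0, x_1, ..., x_{l-1}]) — can be sketched with channel bookkeeping only. A numpy illustration; here g_l is reduced to a stub that projects any number of input channels down to growth_rate channels, standing in for the BN-ReLU-Conv(1×1)-BN-ReLU-Conv(3×3) composite, and the input sizes are invented:

```python
import numpy as np

def g(inputs_concat, growth_rate):
    """Stub for g_l: regardless of the number of input channels,
    the output has exactly growth_rate channels (as a 1x1 convolution would)."""
    c_in, h, w = inputs_concat.shape
    # A fixed random 1x1 "convolution" projecting c_in -> growth_rate channels.
    proj = np.random.default_rng(c_in).normal(size=(growth_rate, c_in))
    return np.maximum(np.einsum('oc,chw->ohw', proj, inputs_concat), 0.0)  # ReLU

def dense_block(x0, growth_rate, num_layers):
    """x_l = g_l([x_0, x_1, ..., x_{l-1}]): each layer sees all earlier outputs."""
    outputs = [x0]
    for _ in range(num_layers):
        concat = np.concatenate(outputs, axis=0)   # channel-wise concatenation
        outputs.append(g(concat, growth_rate))
    return np.concatenate(outputs, axis=0)

x0 = np.ones((3, 16, 16))                          # 3-channel input feature map
out = dense_block(x0, growth_rate=32, num_layers=4)
# output channels: 3 + 4 * 32 = 131
```

The channel count grows linearly with depth (input channels plus growth_rate per layer), which is why a dense block can reuse features from every earlier layer without the parameter cost of widening each individual convolution.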
CN201910334644.7A 2019-04-24 2019-04-24 Analysis method of complex texture CT image Active CN110084796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910334644.7A CN110084796B (en) 2019-04-24 2019-04-24 Analysis method of complex texture CT image


Publications (2)

Publication Number Publication Date
CN110084796A true CN110084796A (en) 2019-08-02
CN110084796B CN110084796B (en) 2023-07-14

Family

ID=67416591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910334644.7A Active CN110084796B (en) 2019-04-24 2019-04-24 Analysis method of complex texture CT image

Country Status (1)

Country Link
CN (1) CN110084796B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179273A (en) * 2019-12-30 2020-05-19 山东师范大学 Method and system for automatically segmenting leucocyte nucleoplasm based on deep learning
CN113223003A (en) * 2021-05-07 2021-08-06 西安智诊智能科技有限公司 Bile duct image segmentation method based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019025716A1 (en) * 2017-08-03 2019-02-07 Universite D'orleans Method and system for mapping the health status of crops
CN109598722A (en) * 2018-12-10 2019-04-09 杭州帝视科技有限公司 Image analysis method based on recurrent neural network
CN109598727A (en) * 2018-11-28 2019-04-09 北京工业大学 A kind of CT image pulmonary parenchyma three-dimensional semantic segmentation method based on deep neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ye Chen et al.: "Thyroid Nodule Detection Method Based on CNN Transfer Learning" (基于CNN迁移学习的甲状腺结节检测方法), Computer Engineering and Applications (《计算机工程与应用》) *


Also Published As

Publication number Publication date
CN110084796B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN107977671A (en) A kind of tongue picture sorting technique based on multitask convolutional neural networks
CN106682633B (en) The classifying identification method of stool examination image visible component based on machine vision
CN106780466A (en) A kind of cervical cell image-recognizing method based on convolutional neural networks
CN106127255B (en) Classification system of cancer digital pathological cell images
CN105139004B (en) Facial expression recognizing method based on video sequence
CN109886986A (en) A kind of skin lens image dividing method based on multiple-limb convolutional neural networks
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN101713776B (en) Neural network-based method for identifying and classifying visible components in urine
CN107909566A (en) A kind of image-recognizing method of the cutaneum carcinoma melanoma based on deep learning
CN107316307A (en) A kind of Chinese medicine tongue image automatic segmentation method based on depth convolutional neural networks
CN107437092A (en) The sorting algorithm of retina OCT image based on Three dimensional convolution neutral net
CN109767440A (en) A kind of imaged image data extending method towards deep learning model training and study
CN107506761A (en) Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN106096605A (en) A kind of image obscuring area detection method based on degree of depth study and device
CN107492095A (en) Medical image pulmonary nodule detection method based on deep learning
CN109508644A (en) Facial paralysis grade assessment system based on the analysis of deep video data
CN108492271A (en) A kind of automated graphics enhancing system and method for fusion multi-scale information
CN109711426A (en) A kind of pathological picture sorter and method based on GAN and transfer learning
CN109272048A (en) A kind of mode identification method based on depth convolutional neural networks
CN106096654A (en) A kind of cell atypia automatic grading method tactful based on degree of depth study and combination
CN109063712A (en) A kind of multi-model Hepatic diffused lesion intelligent diagnosing method and system based on ultrasound image
CN107665492A (en) Colon and rectum panorama numeral pathological image tissue segmentation methods based on depth network
CN107066934A (en) Tumor stomach cell image recognition decision maker, method and tumor stomach section identification decision equipment
CN105005765A (en) Facial expression identification method based on Gabor wavelet and gray-level co-occurrence matrix
CN101866427A (en) Method for detecting and classifying fabric defects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230506

Address after: No. 131, Huancheng Road, Gulou District, Xuzhou City, Jiangsu Province, 221000

Applicant after: XUZHOU CANCER Hospital

Applicant after: Li Xiaofeng

Address before: Room 428, Building 2, West Campus of Xuzhou Medical University, No. 84 Huaihai West Road, Quanshan District, Xuzhou City, Jiangsu Province, 221004

Applicant before: XUZHOU YUNLIAN MEDICAL TECHNOLOGY Co.,Ltd.

GR01 Patent grant