CN108470337A - Subsolid lung nodule quantitative analysis method and system based on image depth features - Google Patents

Subsolid lung nodule quantitative analysis method and system based on image depth features

Info

Publication number
CN108470337A
CN108470337A (application CN201810280468.9A)
Authority
CN
China
Prior art keywords
feature
layer
parameter
lung nodule
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810280468.9A
Other languages
Chinese (zh)
Inventor
冯宝
陈相猛
刘壮盛
龙晚生
李卓永
张朝同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangmen Central Hospital
Original Assignee
Jiangmen Central Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangmen Central Hospital
Priority to CN201810280468.9A
Publication of CN108470337A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • G06T2207/30064Lung nodule

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a subsolid lung nodule quantitative analysis method and system based on image depth features. The method comprises the following steps: S1, acquisition and preprocessing of lung CT images; S2, segmentation and extraction of candidate lung nodule ROIs; S3, extraction of deep image features of subsolid lung nodules with a convolutional neural network; S4, construction of an ELM prediction model based on a group sparse constraint, in which an L1 norm is introduced to improve feature-selection stability and prevent overfitting, and a group sparse constraint is added to improve model robustness and generalization ability. The invention uses CT image features of subsolid lung nodules to predict their pathological invasiveness (preinvasive, minimally invasive, invasive); by extracting the deep image features of subsolid lung nodules, quantitative analysis of their pathological invasiveness can be completed.

Description

Subsolid lung nodule quantitative analysis method and system based on image depth features
Technical field
The present invention relates to the field of image analysis, and in particular to a subsolid lung nodule quantitative analysis method and system based on image depth features.
Background technology
Lung cancer is the leading cause of tumor-related death worldwide. Although the overall 5-year survival rate of lung cancer is low, early diagnosis and treatment can markedly improve the postoperative 5-year survival rate. Research has found that subsolid lung nodules are closely related to early-stage lung cancer. With the increasing use of conventional CT examination and low-dose CT screening of the lung, the detection rate of subsolid lung nodules keeps rising. The conventional diagnosis of subsolid lung nodules depends on the experience of the radiologist and is difficult to quantify. Therefore, using state-of-the-art image processing techniques to quantitatively extract image features of subsolid lung nodules and to predict their pathological invasiveness (preinvasive, minimally invasive, invasive) has important clinical significance.
Traditional feature extraction for lung nodule images focuses mainly on textural and shape features of the lesion region. Textural features are usually extracted with the gray-level co-occurrence matrix (GLCM) method, which captures the local gray-value variation of the lesion region. In addition, the wavelet transform can effectively suppress image noise; enhancing the lesion region with a wavelet transform and then extracting texture features with the GLCM method allows effective quantitative analysis of lung nodules with regular edges. However, subsolid lung nodules have blurred edges, irregular shapes, and highly variable solid components, so the stability of such extracted features is hard to guarantee, which complicates quantitative analysis.
Traditional lung nodule feature extraction usually defines a massive set of candidate features and then selects those most correlated with pathological invasiveness for prediction. Traditional feature selection uses the LASSO method, whose drawbacks are: (1) the solid components of subsolid lung nodules exhibit a structured, spatially clustered prior; LASSO only considers sparsity and ignores this structured prior, so the selected features are unstable; (2) its hyperparameter must be determined by cross-validation, which is computationally expensive; (3) LASSO is based on a linear model and does not account for nonlinear factors.
In recent years, deep learning algorithms represented by convolutional neural networks (CNNs) have attracted wide attention and achieved excellent results in image processing. Through the deep learning process, information hidden in the raw input data can be abstracted layer by layer; the deeper the network, the more stable the extracted features, something that shallow architectures and traditional feature extraction cannot achieve.
Summary of the invention
To solve the above problems, the present invention provides a subsolid lung nodule quantitative analysis method and system based on image depth features, which uses CT image features of subsolid lung nodules to predict their pathological invasiveness (preinvasive, minimally invasive, invasive); by extracting the deep image features of subsolid lung nodules, quantitative analysis of their pathological invasiveness can be completed.
To achieve the above object, the present invention adopts the following technical solution:
A subsolid lung nodule quantitative analysis method based on image depth features comprises the following steps:
S1, acquisition and preprocessing of lung CT images;
Lung image data are acquired with high-resolution computed tomography (CT) and uploaded to a computer-aided detection (CAD) system for preprocessing of the image data. In the binarization step, a maximum entropy thresholding method selects a threshold on the maximum-entropy principle to divide the original image into a region of interest and a background region; the binarized image is then partitioned into several 8-connected regions, the largest connected region is kept and the remaining regions are removed, and finally the original image is reconstructed at the position corresponding to the largest connected region.
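The preprocessing described above (maximum entropy thresholding, then keeping the largest 8-connected region) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names and the plain BFS labeling are the author's choices.

```python
import numpy as np
from collections import deque

def max_entropy_threshold(img, bins=256):
    """Kapur's maximum-entropy threshold: pick the gray level that maximizes
    the sum of background and foreground histogram entropies."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = cdf[t - 1], 1.0 - cdf[t - 1]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return edges[best_t]

def largest_8_connected(mask):
    """Keep only the largest 8-connected foreground region of a binary mask."""
    labels = np.zeros(mask.shape, dtype=int)
    best_label, best_size, cur = 0, 0, 0
    H, W = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        cur += 1
        labels[sy, sx] = cur
        size, q = 0, deque([(sy, sx)])
        while q:                       # BFS over the 8-neighborhood
            y, x = q.popleft()
            size += 1
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not labels[ny, nx]:
                        labels[ny, nx] = cur
                        q.append((ny, nx))
        if size > best_size:
            best_size, best_label = size, cur
    return labels == best_label
```

In practice a library routine (e.g. a labeling function from an image-processing package) would replace the BFS; the point here is only the two-stage logic of the preprocessing step.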
S2, segmentation and extraction of candidate lung nodule ROIs;
Candidate lung nodule ROIs are highlighted with image enhancement techniques; finally, through image reconstruction, an image is obtained in which the gray values inside the ROI are identical to the original image and the gray values outside the ROI are set to 0 (i.e., blacked out). Through the above enhancement, independent candidate nodules can be extracted.
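The ROI extraction step above amounts to masking the image outside the ROI to 0 and cropping the candidate nodule patch; a minimal sketch (function names are illustrative):

```python
import numpy as np

def extract_roi(image, roi_mask):
    """Keep the original gray values inside the ROI; set everything outside to 0."""
    out = np.zeros_like(image)
    out[roi_mask] = image[roi_mask]
    return out

def crop_to_mask(image, roi_mask):
    """Crop the masked image to the bounding box of the ROI (the candidate nodule patch)."""
    ys, xs = np.nonzero(roi_mask)
    return extract_roi(image, roi_mask)[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```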
S3, extraction of deep image features of subsolid lung nodules with a convolutional neural network;
The extracted lung nodule is fed to the input layer as a sample. Convolution yields the hidden layer C1 (a convolutional layer), which is a feature extraction layer; each C layer is followed by a down-sampling (pooling) layer S, also called a feature mapping layer. The S layer divides each feature map into several regions and takes their averages, producing new, lower-dimensional features; it reduces the resolution of the feature maps and also reduces the sensitivity of the output to displacement. The single-layer feature vector obtained after feature extraction is denoted by X.
S4, construction of an ELM prediction model based on a group sparse constraint; an L1 norm is introduced to improve feature-selection stability and prevent overfitting, and a group sparse constraint is added to improve model robustness and generalization ability.
Preferably, the step S3 specifically comprises the following steps:
Step 3.1, feature extraction;
The C layer is the feature extraction layer. Each neuron is connected to the previous layer through a local receptive field, local features are extracted by convolution, and the positional relations of each feature to the other feature spaces are determined from the local features. The convolution operation in a CNN first convolves the original image with a convolution kernel, then adds a bias, and applies an activation function to obtain a feature. Assuming layer l is a convolutional layer, the output a_j^(l) of the j-th feature map of layer l is:

a_j^(l) = f( Σ_{i∈M_j} a_i^(l-1) * k_ij^(l) + b_j^(l) )  (1)

where f(·) is the activation function, k_ij^(l) denotes the i-th template of convolution kernel j used by the j-th feature map of layer l, b_j^(l) denotes the bias of the j-th feature map of layer l, and M_j indicates which inputs the j-th feature map of layer l selects. A convolution kernel can have multiple templates: the required inputs are selected and convolved with each template of the kernel, the results are summed, a bias is added, and one feature map is obtained after the activation function.
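The per-map computation of formula (1) can be sketched in plain NumPy. This is an illustrative simplification (one template per selected input map, "valid" convolution); `conv2d_valid` and `conv_layer_output` are placeholder names, not the patent's code.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2-D 'valid' convolution (kernel flipped, as in the mathematical definition)."""
    kh, kw = k.shape
    H, W = x.shape
    kf = k[::-1, ::-1]
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kf)
    return out

def conv_layer_output(prev_maps, kernels, bias, selected, f=np.tanh):
    """One output map per formula (1): a_j = f( sum_{i in M_j} a_i * k_ij + b_j )."""
    acc = sum(conv2d_valid(prev_maps[i], kernels[i]) for i in selected)
    return f(acc + bias)
```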
Step 3.2, Feature Mapping;
The S layer is a feature mapping layer. Through local averaging, all units of a map share equal weights, which reduces the number of free parameters in the CNN and the complexity of network parameter selection. The down-sampling layer of a CNN only shrinks the scale of the convolutional layer output, so it does not change the number of output maps. The output of the j-th map of layer l is computed as:

a_j^(l) = f( down(a_j^(l-1)) + b_j^(l) )  (2)

where b_j^(l) is a bias, f(·) is the activation function, and down(·) denotes the down-sampling operation. For a simple down-sampling layer, f(·) may be chosen as the linear activation function, i.e., f(x) = x, and the bias may be omitted, i.e., b_j^(l) = 0. In this case, the average pooling operation simply sums all elements within each region of the down-sampling layer and divides by the number of elements in the region.
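With the linear activation and zero bias described above, the S layer reduces to non-overlapping average pooling, which can be written in a few lines (an illustrative sketch, not the patent's code):

```python
import numpy as np

def average_pool(a, s):
    """Non-overlapping s x s average pooling: sum each region, divide by its element count."""
    H, W = a.shape
    H2, W2 = H // s, W // s
    return a[:H2 * s, :W2 * s].reshape(H2, s, W2, s).mean(axis=(1, 3))
```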
After the last down-sampling, the final output is obtained through the activation function f(·):
Y = f(a^(l) · ω + b^(l))  (3)
After several rounds of convolution and down-sampling, the last layer of feature maps is fully connected to obtain the single-layer feature vector X.
Step 3.3, calculation of the convolution kernel k and its bias b, and the down-sampling filter c and its bias b;
First a cost function is defined. Assume input x, output y, and label t; then, with m input samples, the mean error is:

E = (1/m) Σ_{k=1}^{m} (1/2) ‖t_k − y_k‖²  (4)

To prevent overfitting, the trained weight parameters are penalized (also called weight decay). With penalty coefficient λ, which adjusts the proportion of the weight penalty, the cost function is:

C = (1/m) Σ_{k=1}^{m} (1/2) ‖t_k − y_k‖² + (λ/2) Σ_{l=1}^{L} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (ω_ji^(l))²  (5)

where ω_ji^(l) denotes the connection weight between the i-th node of layer l and the j-th node of layer l+1, L is the number of layers, and s_l is the number of nodes of layer l excluding the bias. The goal is to minimize the cost function, and the parameters are updated by gradient descent. With learning rate α, the parameters are updated as:

ω_ji^(l) := ω_ji^(l) − α ∂C/∂ω_ji^(l)  (6)
b_i^(l) := b_i^(l) − α ∂C/∂b_i^(l)  (7)
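The update rule of (4)–(7) can be illustrated on a toy linear model y = Xw + b (a sketch under that assumption; the patent applies the same rule to all CNN parameters, and the bias is not penalized here):

```python
import numpy as np

def train_step(w, b, X, t, lam, alpha):
    """One gradient-descent update with weight decay.
    Data term averaged over the m samples, plus lam * w on the weights, per (6)-(9)."""
    m = X.shape[0]
    err = X @ w + b - t
    grad_w = X.T @ err / m + lam * w   # dC/dw
    grad_b = err.mean()                # dC/db (bias not penalized)
    return w - alpha * grad_w, b - alpha * grad_b
```

Iterating `train_step` drives the cost down geometrically on this quadratic problem.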
The partial derivatives of the cost function C with respect to the weights and biases are:

∂C/∂ω_ji^(l) = (1/m) Σ_{k=1}^{m} ∂E_k/∂ω_ji^(l) + λ ω_ji^(l)  (8)
∂C/∂b_i^(l) = (1/m) Σ_{k=1}^{m} ∂E_k/∂b_i^(l)  (9)

For the error produced by each sample, applying the chain rule gives:

∂E/∂ω_ji^(l) = a_i^(l) δ_j^(l+1)  (10)
∂E/∂b_i^(l) = δ_i^(l+1)  (11)

where z_j^(l+1) denotes the sum of all inputs to the j-th node of layer l+1, with expression:

z_j^(l+1) = Σ_i ω_ji^(l) a_i^(l) + b_j^(l)  (12)

and the corresponding activation is a_j^(l+1) = f(z_j^(l+1))  (13)
δ denotes the residual, which expresses the influence of a node on the residual of the final output value.
When l is the output layer:
δ^(l) = −(t − a^(l)) · f′(z^(l))  (14)
When l is any other layer:
δ^(l) = (ω^(l))^T · δ^(l+1) · f′(z^(l))  (15)
Formulas (8) and (9) can then be written as:

∂C/∂ω_ji^(l) = (1/m) Σ_{k=1}^{m} a_i^(l) δ_j^(l+1) + λ ω_ji^(l), ∂C/∂b_i^(l) = (1/m) Σ_{k=1}^{m} δ_i^(l+1)  (16)
Preferably, the step S4 specifically comprises the following steps:
An ELM mathematical model is built, where the mathematical model of ELM is:

t_j = Σ_{i=1}^{L} ω_i g(a_i · x_j + b_i), j = 1, …, N  (17)

where a_i and b_i are the randomly assigned input weights and biases of the i-th hidden node, g(·) is the activation function, and only the output weights ω_j are unknown. Arranging the N equations of formula (17) gives:
Φω = t  (18)
where

Φ = [ g(a_1·x_1+b_1) … g(a_L·x_1+b_L); … ; g(a_1·x_N+b_1) … g(a_L·x_N+b_L) ] (N×L)  (19)
ω = [ω_1^T … ω_L^T]^T and t = [t_1^T … t_N^T]^T  (20)
The objective function is obtained:

min_ω ‖Φω − t‖² + λ‖ω‖_1  (21)

The unknown parameters of the objective function are estimated iteratively by the Bayesian method.
ω is the output weight vector combining the different features. The noise term b is assumed to be a zero-mean Gaussian random variable with precision β (inverse variance), and the class label t is modeled as a linear combination of the features with additive Gaussian noise. Given a training data set (X, t), where X = (x_1, …, x_N)^T ∈ R^{N×P} is the design matrix, P is the dimension of the feature vector, and the class labels are given by t, the likelihood of the weight vector ω can be written as a multivariate Gaussian distribution:

p(t | X, ω, β) = N(t | Xω, β^{-1}I)  (22)

A prior distribution over ω is introduced to obtain the Bayesian MAP solution of ω. Sparse Bayesian linear discriminant analysis (SBLDA) uses as prior for ω a multivariate Gaussian distribution with zero mean vector and diagonal covariance matrix. Specifically, an individual hyperparameter α_i is set for each weight ω_i, producing the hyperparameter vector α = (α_1, …, α_P)^T, which gives the diagonal elements of the precision matrix. This sparse prior over ω is:

p(ω | α) = Π_{i=1}^{P} N(ω_i | 0, α_i^{-1})  (23)
According to the clustered-block characteristic of lung nodules, a group sparse prior is proposed. One hyperparameter α_g is set for each group of output weights ω_g; that is, the output weight parameters ω_i, i ∈ I_g within a group (I_g is the index set of group g) share one hyperparameter α_g, instead of setting an individual hyperparameter for each weight ω_i. The group sparse prior distribution is:

p(ω | α) = Π_{g=1}^{G} Π_{i∈I_g} N(ω_i | 0, α_g^{-1})  (24)

where α_g represents the precision of the corresponding group of output weights ω_g, and

α = (α_1, …, α_1, α_2, …, α_2, …, α_G, …, α_G)  (25)
Because of the conjugacy of the Gaussian likelihood (with respect to the mean) and the Gaussian prior, the posterior is also Gaussian and has a closed-form solution. The posterior can be expressed as:

p(ω | t, X, α, β) = N(ω | m, Σ)  (26)

The most probable ω for a given training set can be obtained by maximizing the posterior probability (maximum a posteriori, MAP). The mean m and covariance Σ of the posterior distribution of ω are given by:

m = β Σ X^T t  (27)
Σ^{-1} = A + β X^T X  (28)

where A = diag(α).
The hyperparameters α and β are estimated from the training set by maximum marginal likelihood estimation. The marginal likelihood p(t | α, β) is obtained by integrating out the output weights ω:

p(t | α, β) = ∫ p(t | ω, β) p(ω | α) dω  (29)

By completing the square in the exponent and using the canonical form of the Gaussian normalization coefficient, the log-likelihood can be written as:

ln p(t | α, β) = −(1/2) [ N ln(2π) + ln|C| + t^T C^{-1} t ], with C = β^{-1} I + X A^{-1} X^T  (30)

Setting the partial derivatives of the log-likelihood with respect to each hyperparameter α_g and β to zero yields the maximum likelihood estimates of the hyperparameters. Because the mean m and covariance Σ of the posterior distribution depend on α and β, the solutions of these hyperparameters are given in an implicit form, as follows:

α_g = ( Σ_{i∈I_g} γ_i ) / ‖m_g‖²  (31)
β = ( N − Σ_i γ_i ) / ‖t − Xm‖²  (32)

where m_g is the g-th group component of the posterior mean m, and γ_i is defined as:

γ_i = 1 − α_i Σ_ii, i ∈ {1, 2, …, P}  (33)

where Σ_ii is the i-th diagonal element of the posterior covariance.
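The iterative scheme (26)–(33) can be sketched as a small NumPy loop: alternate the posterior updates with the hyperparameter re-estimates, with one precision shared per group. The function name, synthetic data, iteration count, and small numerical guards below are the author's illustrative choices, not the patent's implementation.

```python
import numpy as np

def group_sparse_bayes(X, t, groups, n_iter=30):
    """Group-sparse Bayesian regression on a design matrix X (N x P).
    groups: length-P integer array assigning each weight to a group."""
    N, P = X.shape
    alpha = np.ones(P)   # per-weight precisions, tied within each group
    beta = 1.0           # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * X.T @ X)   # posterior covariance, (28)
        m = beta * Sigma @ X.T @ t                               # posterior mean, (27)
        gamma = 1.0 - alpha * np.diag(Sigma)                     # (33)
        for g in np.unique(groups):                              # group-wise precision, (31)
            idx = groups == g
            alpha[idx] = gamma[idx].sum() / (m[idx] @ m[idx] + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - X @ m) ** 2) + 1e-12)  # (32)
    return m, alpha, beta
```

On data where one whole group of weights is irrelevant, the shared precision of that group grows and its posterior mean is driven toward zero, which is the group-pruning behavior the patent relies on.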
The present invention also provides a subsolid lung nodule quantitative analysis system based on image depth features, comprising the following modules:
a lung CT image data acquisition module, which acquires lung image data with high-resolution computed tomography and uploads them to a computer-aided detection system for preprocessing of the image data;
a computer-aided detection system, for completing the preprocessing of the images; in the binarization step, a maximum entropy thresholding method selects a threshold on the maximum-entropy principle to divide the original image into a region of interest and a background region; the binarized image is then partitioned into several 8-connected regions, the largest connected region is kept and the remaining regions are removed, and the original image is reconstructed at the position corresponding to the largest connected region;
a candidate lung nodule ROI segmentation and extraction module, which highlights candidate lung nodule ROIs with image enhancement techniques and finally, through image reconstruction, obtains an image whose gray values inside the ROI are identical to the original image and whose gray values outside the ROI are set to 0;
a deep image feature extraction module, which extracts deep image features of subsolid lung nodules with a convolutional neural network; specifically, the extracted lung nodule is fed to the input layer as a sample, and convolution yields the hidden layer C1 (a convolutional layer), a feature extraction layer, each C layer being followed by a down-sampling layer S; the S layer divides each feature map into several regions and takes their averages, producing new, lower-dimensional features;
an ELM prediction model construction module, for building the ELM prediction model based on the group sparse constraint;
a lung nodule identification module, which feeds the features extracted by the convolutional neural network as input data into the constructed ELM prediction model for identification.
Preferably, the deep image feature extraction module extracts the deep image features of subsolid lung nodules through the following steps:
Step 3.1, feature extraction;
The C layer is the feature extraction layer. Each neuron is connected to the previous layer through a local receptive field, local features are extracted by convolution, and the positional relations of each feature to the other feature spaces are determined from the local features. The convolution operation in a CNN first convolves the original image with a convolution kernel, then adds a bias, and applies an activation function to obtain a feature. Assuming layer l is a convolutional layer, the output a_j^(l) of the j-th feature map of layer l is:

a_j^(l) = f( Σ_{i∈M_j} a_i^(l-1) * k_ij^(l) + b_j^(l) )  (1)

where f(·) is the activation function, k_ij^(l) denotes the i-th template of convolution kernel j used by the j-th feature map of layer l, b_j^(l) denotes the bias of the j-th feature map of layer l, and M_j indicates which inputs the j-th feature map of layer l selects. A convolution kernel can have multiple templates: the required inputs are selected and convolved with each template of the kernel, the results are summed, a bias is added, and one feature map is obtained after the activation function.
Step 3.2, feature mapping;
The S layer is a feature mapping layer. Through local averaging, all units of a map share equal weights, which reduces the number of free parameters in the CNN and the complexity of network parameter selection. The down-sampling layer of a CNN only shrinks the scale of the convolutional layer output, so it does not change the number of output maps. The output of the j-th map of layer l is computed as:

a_j^(l) = f( down(a_j^(l-1)) + b_j^(l) )  (2)

where b_j^(l) is a bias, f(·) is the activation function, and down(·) denotes the down-sampling operation. For a simple down-sampling layer, f(·) may be chosen as the linear activation function, i.e., f(x) = x, and the bias may be omitted, i.e., b_j^(l) = 0. In this case, the average pooling operation simply sums all elements within each region of the down-sampling layer and divides by the number of elements in the region.
After the last down-sampling, the final output is obtained through the activation function f(·):
Y = f(a^(l) · ω + b^(l))  (3)
After several rounds of convolution and down-sampling, the last layer of feature maps is fully connected to obtain the single-layer feature vector X.
Step 3.3, calculation of the convolution kernel k and its bias b, and the down-sampling filter c and its bias b;
First a cost function is defined. Assume input x, output y, and label t; then, with m input samples, the mean error is:

E = (1/m) Σ_{k=1}^{m} (1/2) ‖t_k − y_k‖²  (4)

To prevent overfitting, the trained weight parameters are penalized (also called weight decay). With penalty coefficient λ, which adjusts the proportion of the weight penalty, the cost function is:

C = (1/m) Σ_{k=1}^{m} (1/2) ‖t_k − y_k‖² + (λ/2) Σ_{l=1}^{L} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (ω_ji^(l))²  (5)

where ω_ji^(l) denotes the connection weight between the i-th node of layer l and the j-th node of layer l+1, L is the number of layers, and s_l is the number of nodes of layer l excluding the bias. The goal is to minimize the cost function, and the parameters are updated by gradient descent. With learning rate α, the parameters are updated as:

ω_ji^(l) := ω_ji^(l) − α ∂C/∂ω_ji^(l)  (6)
b_i^(l) := b_i^(l) − α ∂C/∂b_i^(l)  (7)

The partial derivatives of the cost function C with respect to the weights and biases are:

∂C/∂ω_ji^(l) = (1/m) Σ_{k=1}^{m} ∂E_k/∂ω_ji^(l) + λ ω_ji^(l)  (8)
∂C/∂b_i^(l) = (1/m) Σ_{k=1}^{m} ∂E_k/∂b_i^(l)  (9)

For the error produced by each sample, applying the chain rule gives:

∂E/∂ω_ji^(l) = a_i^(l) δ_j^(l+1)  (10)
∂E/∂b_i^(l) = δ_i^(l+1)  (11)

where z_j^(l+1) denotes the sum of all inputs to the j-th node of layer l+1, with expression:

z_j^(l+1) = Σ_i ω_ji^(l) a_i^(l) + b_j^(l)  (12)

and the corresponding activation is a_j^(l+1) = f(z_j^(l+1))  (13)
δ denotes the residual, which expresses the influence of a node on the residual of the final output value.
When l is the output layer:
δ^(l) = −(t − a^(l)) · f′(z^(l))  (14)
When l is any other layer:
δ^(l) = (ω^(l))^T · δ^(l+1) · f′(z^(l))  (15)
Formulas (8) and (9) can then be written as:

∂C/∂ω_ji^(l) = (1/m) Σ_{k=1}^{m} a_i^(l) δ_j^(l+1) + λ ω_ji^(l), ∂C/∂b_i^(l) = (1/m) Σ_{k=1}^{m} δ_i^(l+1)  (16)
Preferably, the ELM prediction model construction module builds the ELM prediction model based on the group sparse constraint through the following steps:
The mathematical model of ELM is:

t_j = Σ_{i=1}^{L} ω_i g(a_i · x_j + b_i), j = 1, …, N  (17)

where a_i and b_i are the randomly assigned input weights and biases of the i-th hidden node, g(·) is the activation function, and only the output weights ω_j are unknown. Arranging the N equations of formula (17) gives:
Φω = t  (18)
where

Φ = [ g(a_1·x_1+b_1) … g(a_L·x_1+b_L); … ; g(a_1·x_N+b_1) … g(a_L·x_N+b_L) ] (N×L)  (19)
ω = [ω_1^T … ω_L^T]^T and t = [t_1^T … t_N^T]^T  (20)

The objective function is obtained:

min_ω ‖Φω − t‖² + λ‖ω‖_1  (21)

The unknown parameters of the objective function are estimated iteratively by the Bayesian method.
ω is the output weight vector combining the different features. The noise term b is assumed to be a zero-mean Gaussian random variable with precision β (inverse variance), and the class label t is modeled as a linear combination of the features with additive Gaussian noise. Given a training data set (X, t), where X = (x_1, …, x_N)^T ∈ R^{N×P} is the design matrix, P is the dimension of the feature vector, and the class labels are given by t, the likelihood of the weight vector ω can be written as a multivariate Gaussian distribution:

p(t | X, ω, β) = N(t | Xω, β^{-1}I)  (22)

A prior distribution over ω is introduced to obtain the Bayesian MAP solution of ω. Sparse Bayesian linear discriminant analysis (SBLDA) uses as prior for ω a multivariate Gaussian distribution with zero mean vector and diagonal covariance matrix. Specifically, an individual hyperparameter α_i is set for each weight ω_i, producing the hyperparameter vector α = (α_1, …, α_P)^T, which gives the diagonal elements of the precision matrix. This sparse prior over ω is:

p(ω | α) = Π_{i=1}^{P} N(ω_i | 0, α_i^{-1})  (23)

According to the clustered-block characteristic of lung nodules, a group sparse prior is proposed. One hyperparameter α_g is set for each group of output weights ω_g; that is, the output weight parameters ω_i, i ∈ I_g within a group (I_g is the index set of group g) share one hyperparameter α_g, instead of setting an individual hyperparameter for each weight ω_i. The group sparse prior distribution is:

p(ω | α) = Π_{g=1}^{G} Π_{i∈I_g} N(ω_i | 0, α_g^{-1})  (24)

where α_g represents the precision of the corresponding group of output weights ω_g, and

α = (α_1, …, α_1, α_2, …, α_2, …, α_G, …, α_G)  (25)

Because of the conjugacy of the Gaussian likelihood (with respect to the mean) and the Gaussian prior, the posterior is also Gaussian and has a closed-form solution. The posterior can be expressed as:

p(ω | t, X, α, β) = N(ω | m, Σ)  (26)

The most probable ω for a given training set can be obtained by maximizing the posterior probability (maximum a posteriori, MAP). The mean m and covariance Σ of the posterior distribution of ω are given by:

m = β Σ X^T t  (27)
Σ^{-1} = A + β X^T X  (28)

where A = diag(α).
The hyperparameters α and β are estimated from the training set by maximum marginal likelihood estimation. The marginal likelihood p(t | α, β) is obtained by integrating out the output weights ω:

p(t | α, β) = ∫ p(t | ω, β) p(ω | α) dω  (29)

By completing the square in the exponent and using the canonical form of the Gaussian normalization coefficient, the log-likelihood can be written as:

ln p(t | α, β) = −(1/2) [ N ln(2π) + ln|C| + t^T C^{-1} t ], with C = β^{-1} I + X A^{-1} X^T  (30)

Setting the partial derivatives of the log-likelihood with respect to each hyperparameter α_g and β to zero yields the maximum likelihood estimates of the hyperparameters. Because the mean m and covariance Σ of the posterior distribution depend on α and β, the solutions of these hyperparameters are given in an implicit form, as follows:

α_g = ( Σ_{i∈I_g} γ_i ) / ‖m_g‖²  (31)
β = ( N − Σ_i γ_i ) / ‖t − Xm‖²  (32)

where m_g is the g-th group component of the posterior mean m, and γ_i is defined as:

γ_i = 1 − α_i Σ_ii, i ∈ {1, 2, …, P}  (33)

where Σ_ii is the i-th diagonal element of the posterior covariance.
The invention has the following advantages:
1. Addressing the complicated solid components and blurred boundaries of subsolid nodules, CNN-extracted deep image features help capture, in a holistic way, the image characteristics that describe the pathological invasiveness of subsolid lung nodules.
2. Feature selection and prediction model construction are carried out automatically within the ELM framework; to prevent overfitting, an L1 norm constraint is added on top of ELM, improving the stability of feature selection; to obtain a robust prediction model, a group sparse constraint is applied to the ELM output weights, improving the generalization ability of the model.
3. The regularization in the ELM model is solved within a Bayesian framework, which, on the one hand, avoids the heavy computation brought by cross-validation and, on the other hand, realizes adaptive estimation of the model regularization parameters by maximizing the target likelihood function, improving the accuracy of the model.
Description of the drawings
Fig. 1 is a flow chart of subsolid lung nodule pathological type prediction.
Fig. 2 is a diagram of the feature extraction and classification algorithm for the lung nodule ROI region.
Detailed description of the embodiments
To make the objects and advantages of the present invention clearer, the present invention is described in further detail below with reference to embodiments. It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
An embodiment of the present invention provides a subsolid lung nodule quantitative analysis method based on image depth features, comprising the following steps:
S1, acquisition and preprocessing of lung CT images;
Lung image data are acquired with high-resolution computed tomography (CT) and uploaded to a computer-aided detection (CAD) system for preprocessing of the image data. In the binarization step, a maximum entropy thresholding method selects a threshold on the maximum-entropy principle to divide the original image into a region of interest and a background region; the binarized image is then partitioned into several 8-connected regions, the largest connected region is kept and the remaining regions are removed, and finally the original image is reconstructed at the position corresponding to the largest connected region.
S2, segmentation and extraction of candidate lung nodule ROIs;
Candidate lung nodule ROIs are highlighted with image enhancement techniques; finally, through image reconstruction, an image is obtained in which the gray values inside the ROI are identical to the original image and the gray values outside the ROI are set to 0 (i.e., blacked out). Through the above enhancement, independent candidate nodules can be extracted.
S3, extraction of deep image features of subsolid lung nodules with a convolutional neural network;
The extracted lung nodule is fed to the input layer as a sample. Convolution yields the hidden layer C1 (a convolutional layer), which is a feature extraction layer; each C layer is followed by a down-sampling (pooling) layer S, also called a feature mapping layer. The S layer divides each feature map into several regions and takes their averages, producing new, lower-dimensional features; it reduces the resolution of the feature maps and also reduces the sensitivity of the output to displacement. The single-layer feature vector obtained after feature extraction is denoted by X.
S4, construction of an ELM prediction model based on a group sparse constraint; an L1 norm is introduced to improve feature-selection stability and prevent overfitting, and a group sparse constraint is added to improve model robustness and generalization ability.
Step S3 specifically comprises the following steps:
Step 3.1, feature extraction;
A C layer is a feature-extraction layer: each neuron is connected to a local receptive field of the previous layer and extracts a local feature by convolution; once a local feature is extracted, its positional relationship to the other features is determined as well. The convolution operation in a CNN first convolves the input image with a convolution kernel, then adds a bias, and passes the result through an activation function to obtain a feature map. Assuming layer l is a convolutional layer, the j-th feature output a(l) of layer l is:
a(l)_j = f( Σ_{i∈Mj} a(l-1)_i * k(l)_ij + b(l)_j )    (1)
where f(·) is the activation function, k(l)_ij denotes the i-th template of the kernel j used by the j-th feature map of layer l, b(l)_j denotes the bias of the j-th feature map of layer l, and Mj indicates which inputs the j-th feature map of layer l selects. A kernel may have several templates: the selected inputs are each convolved with one template of the kernel, the results are summed, the bias is added, and the activation function is applied, yielding one feature map.
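Equation (1) can be illustrated with a direct, unoptimized implementation. ReLU is used as the activation purely as an example, since the patent does not fix f(·):

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2-D 'valid' cross-correlation of one input map with one template."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for xx in range(out.shape[1]):
            out[y, xx] = np.sum(x[y:y + kh, xx:xx + kw] * k)
    return out

def conv_layer_forward(maps, kernels, bias, f=lambda z: np.maximum(z, 0.0)):
    """Eq. (1): the j-th output map sums the convolutions of the selected
    input maps with the j-th kernel's templates, adds one shared bias,
    then applies the activation f (ReLU here, as an illustrative choice)."""
    z = sum(conv2d_valid(m, k) for m, k in zip(maps, kernels)) + bias
    return f(z)
```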
Step 3.2, Feature Mapping;
S layers are Feature Mapping layers, and by local average operation, make all units on sample has equal weights, because And reduce the number of free parameter in CNN, reduce the complexity of network parameter selection;CNN down-sampling layers are only to convolution Layer output carries out scale down, therefore down-sampling layer will not change the quantity of output figure, and the output of l layers of j-th of figure calculates Mode is:
Wherein,It is biasing, f () is activation primitive, and it is linear sharp that f (), which may be selected, in simple down-sampling layer operation Function living, i.e. f (x)=x may be selected not bias, i.e.,In this case, average pondization operation is exactly in down-sampling In the region of layer, all elements summation is again divided by the element number in region;
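With f(x) = x and zero bias, the average pooling of equation (2) reduces to plain block averaging, which can be sketched as:

```python
import numpy as np

def avg_pool(x, s=2):
    """Non-overlapping s-by-s average pooling: with f(x) = x and zero bias,
    this is exactly the down-sampling of Eq. (2)."""
    H, W = x.shape
    Hs, Ws = H // s, W // s
    x = x[:Hs * s, :Ws * s]                     # crop so the map tiles exactly
    return x.reshape(Hs, s, Ws, s).mean(axis=(1, 3))
```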
After the last down-sampling, the final output is obtained through the activation function f(·):
y = f(a(l)·ω + b(l))    (3)
After several rounds of convolution and down-sampling, the last layer of feature maps is fully connected to obtain the single-layer feature vector X.
Step 3.3, computing the bias b of the convolution kernel k and the bias b of the down-sampling filter c;
First define a cost function. Suppose the input is x, the output is y, and the label is t; then for m input samples the mean error is:
To prevent over-fitting, the trained weight parameters are penalized (also called weight decay); λ is the penalty coefficient, which adjusts the weight of the penalty term, and the cost function is:
where ω(l)_ij denotes the connection weight between the i-th node of layer l and the j-th node of layer l+1, L is the number of layers, and s_l is the number of nodes of layer l excluding the bias. The goal is to minimize the cost function, and the parameters are updated by gradient descent. With a learning-rate parameter α, the parameters are updated by the following formulas:
The partial derivatives of the cost function C with respect to the weights and biases are as follows:
For the error produced by each sample, the chain rule gives
where z(l+1)_j denotes the sum of all inputs to the j-th node of layer l+1, with expression:
δ denotes the residual, i.e. the influence a node has on the residual of the final output value.
When l is the output layer,
δ(l)=-(t-a(l))·f′(z(l))    (14)
When l is any other layer,
δ(l)=(ω(l))T·δ(l+1)·f′(z(l))    (15)
Formulas (8) and (9) can then be written as
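The weight-decay update of step 3.3 (mean error plus (λ/2)·Σω², minimized by gradient descent with learning rate α) can be sketched on a toy linear model. The model and names below are illustrative, not the patent's CNN:

```python
import numpy as np

def cost(W, X, t, lam):
    """Mean squared error plus the weight-decay penalty (lam/2)*||W||^2."""
    m = X.shape[0]
    return np.sum((X @ W - t) ** 2) / (2 * m) + 0.5 * lam * np.sum(W ** 2)

def grad_step(W, X, t, lam=0.01, alpha=0.1):
    """One gradient-descent update: grad C = X^T (X W - t) / m + lam * W,
    then W <- W - alpha * grad, as in the update formulas of step 3.3."""
    m = X.shape[0]
    return W - alpha * (X.T @ (X @ W - t) / m + lam * W)
```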
Step S4 specifically comprises the following steps:
Building the ELM mathematical model; wherein, the mathematical model of the ELM is:
where only the output weights ω_j are unknown. Arranging the N equations of formula (12) gives:
Φ ω=t (18)
Wherein,
This yields the objective function:
The unknown parameters of the objective function are estimated iteratively by a Bayesian method.
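Before the Bayesian treatment, the basic ELM set-up of formulas (17)–(18) can be sketched: random input weights and biases are fixed, the hidden-layer matrix Φ is formed, and only the output weights ω are solved for. Here plain least squares stands in for that last step (the group-sparse Bayesian estimate below replaces it); function names are illustrative:

```python
import numpy as np

def elm_fit(X, t, n_hidden=20, rng=None):
    """Basic ELM: random input weights and biases are fixed at random,
    Phi = tanh(X @ W_in + b), and only the output weights omega are
    learned, here by least squares on Phi @ omega = t (Eq. (18))."""
    rng = np.random.default_rng(rng)
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    Phi = np.tanh(X @ W_in + b)
    omega, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return W_in, b, omega

def elm_predict(X, W_in, b, omega):
    """Apply the fixed random hidden layer, then the learned output weights."""
    return np.tanh(X @ W_in + b) @ omega
```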
ω is the output weight vector combining the different features. Assume the bias b is a zero-mean Gaussian random variable with inverse variance β. The class label t is modeled as a linear combination of the features plus additive Gaussian noise. Given a training data set (X, t), where X = (x1, …, xN)^T is the design matrix, P is the dimension of the feature vector, and the class labels are given by t, the likelihood of the weight vector ω can be written as a multivariate Gaussian distribution
p(t | X, ω, β) = N(t | X^T ω, β^(-1))    (22)
A prior distribution over ω is introduced to obtain the Bayesian MAP solution of ω. Sparse Bayesian linear discriminant analysis (SBLDA) uses, as the prior of ω, a multivariate Gaussian with zero mean vector and diagonal covariance matrix. Specifically, each weight ω_i is given its own hyperparameter α_i, producing a hyperparameter vector α = (α1, …, αP)^T whose entries give the diagonal elements of the covariance matrix. The sparse prior over ω is as follows
Because a pulmonary nodule has the character of a solid mass, a group-sparse structure of the output weights is desired, and relative to SBLDA a group-sparse prior is proposed. One hyperparameter α_g is set for each group of output weights ω_g: the output weight parameters ω_i, i ∈ I_g inside a group (I_g is the index set of group g) share one hyperparameter α_g, instead of each weight ω_i getting its own. The group-sparse prior distribution is as follows:
where α_g is the precision of the corresponding group of output weights ω_g, and
α = (α_1, …, α_1, α_2, …, α_2, …, α_G, …, α_G)    (25)
Because the Gaussian likelihood is conjugate to the Gaussian prior (with respect to the mean), the posterior is also Gaussian and has a closed-form solution. The posterior can be expressed as
p(ω | t, X, α, β) = N(ω | m, Σ)    (26)
Maximizing the posterior probability (maximum a posteriori, MAP) gives the most probable value of ω for the given training set. The mean m and covariance Σ of the posterior distribution of ω are given by
m = βΣX^T t    (27)
Σ^(-1) = A + βX^T X    (28)
where A = diag(α);
The hyperparameters α and β are estimated from the training set by maximum marginal-likelihood estimation: integrating out the output weights ω gives the marginal likelihood p(t | α, β), i.e.
p(t | α, β) = ∫ p(t | ω, β) p(ω | α) dω    (29)
As in automatic relevance determination, when the marginal likelihood is maximized with respect to α, several elements α_g tend to infinity, and the corresponding weights have posterior distributions concentrated at zero. The basis functions of the groups associated with those weights then play no part in the model's predictions, so the group-sparse model is effectively pruned.
Specifically, by completing the square in the exponent and using the standard form of the Gaussian normalization coefficient, the log-likelihood can be written in the form
Setting the partial derivatives of the log-likelihood with respect to the hyperparameters α_g and β to zero yields the maximum-likelihood estimates of the hyperparameters. Because the mean m and covariance Σ of the posterior depend on α and β, the solutions of these hyperparameters are given in implicit form, as follows
where m_g is the g-th group component of the posterior mean m, and γ_i is defined as
γ_i = 1 − α_i Σ_ii, i ∈ {1, 2, …, P}    (33)
where Σ_ii is the i-th diagonal element of the posterior covariance.
Step S4 can be summarized as follows:
1. Build the ELM mathematical model;
2. Assign the input weights and biases at random;
3. Initialize the hyperparameters α and β;
4. Compute the parameters of the posterior p(ω | t, X, α, β) according to (27) and (28);
5. Update α and β according to (31), (32) and (33);
6. Check formula (30) or the convergence of ω; if the convergence condition is not met, set α ← α_new, β ← β_new and return to step 4; if it is met, terminate with the current estimate of ω.
The computed ω can then be substituted into formula (17) to complete the training of the ELM, after which pulmonary-nodule identification can be performed.
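The six-step procedure above can be sketched as an iterative evidence-maximization loop. The α_g and β update rules below follow the standard sparse-Bayesian (ARD) forms, α_g = Σ_{i∈g} γ_i / ||m_g||² and β = (N − Σγ_i) / ||t − Φm||², which is an assumption consistent with that literature rather than a formula quoted from the patent (its equations (31)–(32) were lost in extraction):

```python
import numpy as np

def group_sparse_bayes(Phi, t, groups, n_iter=50):
    """Iterative evidence-maximization sketch for group-sparse ELM output
    weights: posterior m and Sigma (Eqs. (27)-(28)), gamma (Eq. (33)),
    then ARD-style shared-per-group updates of alpha_g and of beta."""
    N, P = Phi.shape
    alpha = np.ones(P)          # one precision per weight, tied within groups
    beta = 1.0                  # noise precision
    m = np.zeros(P)
    for _ in range(n_iter):
        A = np.diag(alpha)
        Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)     # Eq. (28)
        m = beta * Sigma @ Phi.T @ t                      # Eq. (27)
        gamma = 1.0 - alpha * np.diag(Sigma)              # Eq. (33)
        for g in set(groups):
            idx = [i for i, gi in enumerate(groups) if gi == g]
            # one shared hyperparameter per group (assumed ARD form)
            alpha[idx] = np.sum(gamma[idx]) / (np.sum(m[idx] ** 2) + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ m) ** 2) + 1e-12)
    return m, alpha, beta
```

Groups whose α_g grows large have their weights driven toward zero, which is the pruning behaviour described in the text.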
This embodiment addresses the complex composition of the solid portion of sub-solid nodules and the fuzziness of nodule boundaries. Extracting image features with CNN techniques helps mine the solid-mass features of the nodule in the image, and the convolution and pooling operations of the CNN effectively improve the stability and robustness of feature extraction for sub-solid nodules. For feature selection and model construction on the deep image features, the extreme learning machine (ELM) framework is used: ELM trains quickly and can reach a globally optimal solution, which helps select the invariant features most strongly associated with pathological invasiveness. To improve the robustness of the prediction model, an L1-norm constraint suppresses outliers in the ELM output weights to prevent over-fitting, and a group-sparsity constraint smooths the ELM output weights to improve the generalization ability of the ELM algorithm, raising the accuracy with which the model diagnoses the invasiveness of sub-solid nodules.
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A method for quantitative analysis of sub-solid pulmonary nodules based on deep image features, characterized by comprising the following steps:
S1, acquisition and preprocessing of lung CT images;
lung image data are acquired with high-resolution computed tomography and uploaded to a computer-aided detection system for preprocessing of the image data; wherein, in the binarization step, a maximum-entropy thresholding method is used: taking maximum entropy as the criterion, a single threshold splits the original image into a region of interest and a background region; the binary image is then partitioned into several 8-connected regions, only the largest connected region is retained and the remaining regions are removed, after which the original image is reconstructed at the position corresponding to the largest connected region;
S2, segmentation and extraction of candidate pulmonary-nodule ROIs;
image enhancement is used to highlight the candidate pulmonary-nodule ROI; image reconstruction then yields an image in which the ROI keeps the gray levels of the original image while everything outside the ROI is set to 0;
S3, extracting deep image features of the sub-solid pulmonary nodule with a convolutional neural network;
the extracted nodule is fed to the input layer as a sample, and convolution produces the hidden layer C1, a feature-extraction layer; each C layer is followed by a down-sampling layer S; an S layer cuts a feature map into several regions and takes the average of each, yielding a new feature of lower dimension;
S4, building an ELM prediction model with a group-sparsity constraint, wherein an L1 norm is introduced to improve the stability of feature selection and prevent over-fitting, and the group-sparsity constraint is added to improve the robustness and generalization ability of the model.
2. The method for quantitative analysis of sub-solid pulmonary nodules based on deep image features according to claim 1, characterized in that step S3 specifically comprises the following steps:
step 3.1, feature extraction;
a C layer is a feature-extraction layer: each neuron is connected to a local receptive field of the previous layer and extracts a local feature by convolution, and once a local feature is extracted its positional relationship to the other features is determined as well; the convolution operation in a CNN first convolves the input image with a convolution kernel, then adds a bias, and passes the result through an activation function to obtain a feature map; assuming layer l is a convolutional layer, the j-th feature output a(l) of layer l is:
a(l)_j = f( Σ_{i∈Mj} a(l-1)_i * k(l)_ij + b(l)_j )    (1)
where f(·) is the activation function, k(l)_ij denotes the i-th template of the kernel j used by the j-th feature map of layer l, b(l)_j denotes the bias of the j-th feature map of layer l, and Mj indicates which inputs the j-th feature map of layer l selects; a kernel may have several templates: the selected inputs are each convolved with one template of the kernel, the results are summed, the bias is added, and the activation function is applied, yielding one feature map;
step 3.2, feature mapping;
an S layer is a feature-mapping layer: through a local averaging operation, all units of a map share the same weights, which reduces the number of free parameters of the CNN and the complexity of choosing the network parameters; a down-sampling layer in a CNN only shrinks the scale of the convolutional layer's output, so it does not change the number of output maps, and the output of the j-th map of layer l is computed as:
a(l)_j = f( down(a(l-1)_j) + b(l)_j )    (2)
where down(·) averages each pooling region, b(l)_j is a bias, and f(·) is the activation function; for a plain down-sampling operation f(·) may be chosen linear, i.e. f(x) = x, and the bias may be omitted, i.e. b(l)_j = 0; in that case average pooling simply sums all elements of a region of the down-sampling layer and divides by the number of elements in the region;
after the last down-sampling, the final output is obtained through the activation function f(·):
y = f(a(l)·ω + b(l))    (3)
after several rounds of convolution and down-sampling, the last layer of feature maps is fully connected to obtain the single-layer feature vector X;
step 3.3, computing the bias b of the convolution kernel k and the bias b of the down-sampling filter c;
first define a cost function: suppose the input is x, the output is y, and the label is t; then for m input samples the mean error is:
to prevent over-fitting, the trained weight parameters are penalized (also called weight decay), λ being the penalty coefficient that adjusts the weight of the penalty term, and the cost function is:
where ω(l)_ij denotes the connection weight between the i-th node of layer l and the j-th node of layer l+1, L is the number of layers, and s_l is the number of nodes of layer l excluding the bias; the goal is to minimize the cost function, and the parameters are updated by gradient descent; with a learning-rate parameter α, the parameters are updated by the following formulas:
the partial derivatives of the cost function C with respect to the weights and biases are as follows:
for the error produced by each sample, the chain rule gives
where z(l+1)_j denotes the sum of all inputs to the j-th node of layer l+1, with expression:
δ denotes the residual, i.e. the influence a node has on the residual of the final output value;
when l is the output layer,
δ(l)=-(t-a(l))·f′(z(l))    (14)
when l is any other layer,
δ(l)=(ω(l))T·δ(l+1)·f′(z(l))    (15)
formulas (8) and (9) can then be written as
3. The method for quantitative analysis of sub-solid pulmonary nodules based on deep image features according to claim 1, characterized in that step S4 specifically comprises the following steps:
building the ELM mathematical model; wherein, the mathematical model of the ELM is:
where only the output weights ω_j are unknown; arranging the N equations of formula (12) gives:
Φω = t    (18)
wherein,
the objective function is obtained:
the unknown parameters of the objective function are estimated iteratively by a Bayesian method;
ω is the output weight vector combining the different features; assume the bias b is a zero-mean Gaussian random variable with inverse variance β; the class label t is modeled as a linear combination of the features plus additive Gaussian noise; given a training data set (X, t), where X = (x1, …, xN)^T is the design matrix, P is the dimension of the feature vector, and the class labels are given by t, the likelihood of the weight vector ω can be written as a multivariate Gaussian distribution
p(t | X, ω, β) = N(t | X^T ω, β^(-1))    (22)
a prior distribution over ω is introduced to obtain the Bayesian MAP solution of ω; sparse Bayesian linear discriminant analysis (SBLDA) uses, as the prior of ω, a multivariate Gaussian with zero mean vector and diagonal covariance matrix; specifically, each weight ω_i is given its own hyperparameter α_i, producing a hyperparameter vector α = (α1, …, αP)^T whose entries give the diagonal elements of the covariance matrix; the sparse prior over ω is as follows
because a pulmonary nodule has the character of a solid mass, a group-sparse prior is proposed: one hyperparameter α_g is set for each group of output weights ω_g, i.e. the output weight parameters ω_i, i ∈ I_g inside a group (I_g is the index set of group g) share one hyperparameter α_g, instead of each weight ω_i getting its own; the group-sparse prior distribution is as follows:
where α_g is the precision of the corresponding group of output weights ω_g, and
α = (α_1, …, α_1, α_2, …, α_2, …, α_G, …, α_G)    (25)
because the Gaussian likelihood is conjugate to the Gaussian prior (with respect to the mean), the posterior is also Gaussian and has a closed-form solution; the posterior can be expressed as
p(ω | t, X, α, β) = N(ω | m, Σ)    (26)
maximizing the posterior probability (maximum a posteriori, MAP) gives the most probable value of ω for the given training set; the mean m and covariance Σ of the posterior distribution of ω are given by
m = βΣX^T t    (27)
Σ^(-1) = A + βX^T X    (28)
where A = diag(α);
the hyperparameters α and β are estimated from the training set by maximum marginal-likelihood estimation: integrating out the output weights ω gives the marginal likelihood p(t | α, β), i.e.
p(t | α, β) = ∫ p(t | ω, β) p(ω | α) dω    (29)
by completing the square in the exponent and using the standard form of the Gaussian normalization coefficient, the log-likelihood can be written in the form
setting the partial derivatives of the log-likelihood with respect to the hyperparameters α_g and β to zero yields the maximum-likelihood estimates of the hyperparameters; because the mean m and covariance Σ of the posterior depend on α and β, the solutions of these hyperparameters are given in implicit form, as follows
where m_g is the g-th group component of the posterior mean m, and γ_i is defined as
γ_i = 1 − α_i Σ_ii, i ∈ {1, 2, …, P}    (33)
where Σ_ii is the i-th diagonal element of the posterior covariance.
4. A system for quantitative analysis of sub-solid pulmonary nodules based on deep image features, characterized by comprising:
a lung CT image data acquisition module, which acquires lung image data with high-resolution computed tomography and uploads them to a computer-aided detection system for preprocessing of the image data;
a computer-aided detection system for completing the preprocessing of the image, wherein in the binarization step a maximum-entropy thresholding method is used: taking maximum entropy as the criterion, a single threshold splits the original image into a region of interest and a background region; the binary image is then partitioned into several 8-connected regions, only the largest connected region is retained and the remaining regions are removed, after which the original image is reconstructed at the position corresponding to the largest connected region;
a candidate pulmonary-nodule ROI segmentation and extraction module, which uses image enhancement to highlight the candidate nodule ROI and then, by image reconstruction, obtains an image in which the ROI keeps the gray levels of the original image while everything outside the ROI is set to 0;
a deep-image-feature extraction module, which extracts the deep image features of the sub-solid pulmonary nodule with a convolutional neural network: specifically, the extracted nodule is fed to the input layer as a sample, and convolution produces the hidden layer C1, a convolutional (feature-extraction) layer; each C layer is followed by a down-sampling layer S; an S layer cuts a feature map into several regions and takes the average of each, yielding a new feature of lower dimension;
an ELM prediction model building module for building an ELM prediction model based on a group-sparsity constraint;
a pulmonary-nodule identification module, which takes the features extracted by the convolutional neural network as input data and performs identification in the built ELM prediction model.
5. The system for quantitative analysis of sub-solid pulmonary nodules based on deep image features according to claim 4, characterized in that the deep-image-feature extraction module extracts the deep image features of the sub-solid pulmonary nodule through the following steps:
step 3.1, feature extraction;
a C layer is a feature-extraction layer: each neuron is connected to a local receptive field of the previous layer and extracts a local feature by convolution, and once a local feature is extracted its positional relationship to the other features is determined as well; the convolution operation in a CNN first convolves the input image with a convolution kernel, then adds a bias, and passes the result through an activation function to obtain a feature map; assuming layer l is a convolutional layer, the j-th feature output a(l) of layer l is:
a(l)_j = f( Σ_{i∈Mj} a(l-1)_i * k(l)_ij + b(l)_j )    (1)
where f(·) is the activation function, k(l)_ij denotes the i-th template of the kernel j used by the j-th feature map of layer l, b(l)_j denotes the bias of the j-th feature map of layer l, and Mj indicates which inputs the j-th feature map of layer l selects; a kernel may have several templates: the selected inputs are each convolved with one template of the kernel, the results are summed, the bias is added, and the activation function is applied, yielding one feature map;
step 3.2, feature mapping;
an S layer is a feature-mapping layer: through a local averaging operation, all units of a map share the same weights, which reduces the number of free parameters of the CNN and the complexity of choosing the network parameters; a down-sampling layer in a CNN only shrinks the scale of the convolutional layer's output, so it does not change the number of output maps, and the output of the j-th map of layer l is computed as:
a(l)_j = f( down(a(l-1)_j) + b(l)_j )    (2)
where down(·) averages each pooling region, b(l)_j is a bias, and f(·) is the activation function; for a plain down-sampling operation f(·) may be chosen linear, i.e. f(x) = x, and the bias may be omitted, i.e. b(l)_j = 0; in that case average pooling simply sums all elements of a region of the down-sampling layer and divides by the number of elements in the region;
after the last down-sampling, the final output is obtained through the activation function f(·):
y = f(a(l)·ω + b(l))    (3)
after several rounds of convolution and down-sampling, the last layer of feature maps is fully connected to obtain the single-layer feature vector X;
step 3.3, computing the bias b of the convolution kernel k and the bias b of the down-sampling filter c;
first define a cost function: suppose the input is x, the output is y, and the label is t; then for m input samples the mean error is:
to prevent over-fitting, the trained weight parameters are penalized (also called weight decay), λ being the penalty coefficient that adjusts the weight of the penalty term, and the cost function is:
where ω(l)_ij denotes the connection weight between the i-th node of layer l and the j-th node of layer l+1, L is the number of layers, and s_l is the number of nodes of layer l excluding the bias; the goal is to minimize the cost function, and the parameters are updated by gradient descent; with a learning-rate parameter α, the parameters are updated by the following formulas:
the partial derivatives of the cost function C with respect to the weights and biases are as follows:
for the error produced by each sample, the chain rule gives
where z(l+1)_j denotes the sum of all inputs to the j-th node of layer l+1, with expression:
δ denotes the residual, i.e. the influence a node has on the residual of the final output value;
when l is the output layer,
δ(l)=-(t-a(l))·f′(z(l))    (14)
when l is any other layer,
δ(l)=(ω(l))T·δ(l+1)·f′(z(l))    (15)
formulas (8) and (9) can then be written as
6. The system for quantitative analysis of sub-solid pulmonary nodules based on deep image features according to claim 4, characterized in that the ELM prediction model building module builds the ELM prediction model based on the group-sparsity constraint through the following steps:
the mathematical model of the ELM is:
where only the output weights ω_j are unknown; arranging the N equations of formula (12) gives:
Φω = t    (18)
wherein,
the objective function is obtained:
the unknown parameters of the objective function are estimated iteratively by a Bayesian method;
ω is the output weight vector combining the different features; assume the bias b is a zero-mean Gaussian random variable with inverse variance β; the class label t is modeled as a linear combination of the features plus additive Gaussian noise; given a training data set (X, t), where X = (x1, …, xN)^T is the design matrix, P is the dimension of the feature vector, and the class labels are given by t, the likelihood of the weight vector ω can be written as a multivariate Gaussian distribution
p(t | X, ω, β) = N(t | X^T ω, β^(-1))    (22)
a prior distribution over ω is introduced to obtain the Bayesian MAP solution of ω; sparse Bayesian linear discriminant analysis (SBLDA) uses, as the prior of ω, a multivariate Gaussian with zero mean vector and diagonal covariance matrix; specifically, each weight ω_i is given its own hyperparameter α_i, producing a hyperparameter vector α = (α1, …, αP)^T whose entries give the diagonal elements of the covariance matrix; the sparse prior over ω is as follows
because a pulmonary nodule has the character of a solid mass, a group-sparse prior is proposed: one hyperparameter α_g is set for each group of output weights ω_g, i.e. the output weight parameters ω_i, i ∈ I_g inside a group (I_g is the index set of group g) share one hyperparameter α_g, instead of each weight ω_i getting its own; the group-sparse prior distribution is as follows:
where α_g is the precision of the corresponding group of output weights ω_g, and
α = (α_1, …, α_1, α_2, …, α_2, …, α_G, …, α_G)    (25)
because the Gaussian likelihood is conjugate to the Gaussian prior (with respect to the mean), the posterior is also Gaussian and has a closed-form solution; the posterior can be expressed as
p(ω | t, X, α, β) = N(ω | m, Σ)    (26)
maximizing the posterior probability (maximum a posteriori, MAP) gives the most probable value of ω for the given training set; the mean m and covariance Σ of the posterior distribution of ω are given by
m = βΣX^T t    (27)
Σ^(-1) = A + βX^T X    (28)
where A = diag(α);
the hyperparameters α and β are estimated from the training set by maximum marginal-likelihood estimation: integrating out the output weights ω gives the marginal likelihood p(t | α, β), i.e.
p(t | α, β) = ∫ p(t | ω, β) p(ω | α) dω    (29)
by completing the square in the exponent and using the standard form of the Gaussian normalization coefficient, the log-likelihood can be written in the form
setting the partial derivatives of the log-likelihood with respect to the hyperparameters α_g and β to zero yields the maximum-likelihood estimates of the hyperparameters; because the mean m and covariance Σ of the posterior depend on α and β, the solutions of these hyperparameters are given in implicit form, as follows
where m_g is the g-th group component of the posterior mean m, and γ_i is defined as
γ_i = 1 − α_i Σ_ii, i ∈ {1, 2, …, P}    (33)
where Σ_ii is the i-th diagonal element of the posterior covariance.
CN201810280468.9A 2018-04-02 2018-04-02 A kind of sub- reality Lung neoplasm quantitative analysis method and system based on picture depth feature Pending CN108470337A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810280468.9A CN108470337A (en) 2018-04-02 2018-04-02 A kind of sub- reality Lung neoplasm quantitative analysis method and system based on picture depth feature


Publications (1)

Publication Number Publication Date
CN108470337A true CN108470337A (en) 2018-08-31

Family

ID=63262336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810280468.9A Pending CN108470337A (en) 2018-04-02 2018-04-02 A kind of sub- reality Lung neoplasm quantitative analysis method and system based on picture depth feature

Country Status (1)

Country Link
CN (1) CN108470337A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741349A (en) * 2019-01-24 2019-05-10 桂林航天工业学院 A kind of method of cerebral arterial thrombosis image segmentation
CN109816665A (en) * 2018-12-30 2019-05-28 苏州大学 A kind of fast partition method and device of optical coherence tomographic image
CN110110634A (en) * 2019-04-28 2019-08-09 南通大学 Pathological image polychromatophilia color separation method based on deep learning
CN110148467A (en) * 2019-05-16 2019-08-20 东北大学 A kind of Lung neoplasm device of computer aided diagnosis and method based on improvement CNN
CN110265095A (en) * 2019-05-22 2019-09-20 首都医科大学附属北京佑安医院 For HCC recurrence and construction method and the application of the prediction model and nomogram of RFS
CN110321793A (en) * 2019-05-23 2019-10-11 平安科技(深圳)有限公司 Check enchashment method, apparatus, equipment and computer readable storage medium
CN114862798A (en) * 2022-05-09 2022-08-05 华东师范大学 Multi-view representation learning method for tumor pathology auxiliary diagnosis
CN116935009A (en) * 2023-09-19 2023-10-24 中南大学 Operation navigation system for prediction based on historical data analysis

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160361A (en) * 2015-09-30 2015-12-16 东软集团股份有限公司 Image identification method and apparatus
CN105701506A (en) * 2016-01-12 2016-06-22 杭州电子科技大学 Improved method based on extreme learning machine (ELM) and sparse representation classification
CN106600584A (en) * 2016-12-07 2017-04-26 电子科技大学 Tsallis entropy selection-based suspected pulmonary nodule detection method
CN107280697A (en) * 2017-05-15 2017-10-24 北京市计算中心 Pulmonary nodule grading method and system based on deep learning and data fusion
CN107301640A (en) * 2017-06-19 2017-10-27 太原理工大学 Method for detecting small pulmonary nodules using convolutional-neural-network-based object detection

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
PU HAO et al.: "A simple and effective method for image classification", IEEE Xplore *
YE QING et al.: "Coupled fault diagnosis method based on improved SBELM", Journal of Southwest Jiaotong University *
WANG YUANYUAN et al.: "Computer-aided diagnosis model for lung tumors based on ensemble convolutional neural networks", Journal of Biomedical Engineering *
ZHENG GUANGYUAN et al.: "A survey of computer-aided detection and diagnosis systems for medical imaging", HTTP://KNS.CNKI.NET/KCMS/DETAIL/11.2560.TP.20180111.1724.017.HTML *
CHEN JIE et al.: "Accuracy Analysis and Effectiveness Evaluation of Anti-Ship Missile Weapon Systems", 30 November 2017 *
WEI PENGCHENG et al.: "Integration and Development of Big Data Analytics and Machine Learning", 31 May 2017 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816665A (en) * 2018-12-30 2019-05-28 苏州大学 Fast segmentation method and device for optical coherence tomography images
CN109741349A (en) * 2019-01-24 2019-05-10 桂林航天工业学院 Method for segmenting cerebral arterial thrombosis images
CN109741349B (en) * 2019-01-24 2021-12-07 江门市中心医院 Method for segmenting cerebral arterial thrombosis image
CN110110634A (en) * 2019-04-28 2019-08-09 南通大学 Multi-staining separation method for pathological images based on deep learning
CN110110634B (en) * 2019-04-28 2023-04-07 南通大学 Pathological image multi-staining separation method based on deep learning
CN110148467A (en) * 2019-05-16 2019-08-20 东北大学 Pulmonary nodule computer-aided diagnosis device and method based on an improved CNN
CN110148467B (en) * 2019-05-16 2023-05-23 东北大学 Pulmonary nodule computer-aided diagnosis device and method based on improved CNN
CN110265095A (en) * 2019-05-22 2019-09-20 首都医科大学附属北京佑安医院 Construction method and application of a prediction model and nomogram for HCC recurrence and RFS
CN110321793A (en) * 2019-05-23 2019-10-11 平安科技(深圳)有限公司 Cheque encashment method, apparatus, device and computer-readable storage medium
CN114862798A (en) * 2022-05-09 2022-08-05 华东师范大学 Multi-view representation learning method for tumor pathology auxiliary diagnosis
CN116935009A (en) * 2023-09-19 2023-10-24 中南大学 Surgical navigation system with prediction based on historical data analysis
CN116935009B (en) * 2023-09-19 2023-12-22 中南大学 Surgical navigation system with prediction based on historical data analysis

Similar Documents

Publication Publication Date Title
CN108470337A (en) Quantitative analysis method and system for sub-solid pulmonary nodules based on deep image features
CN111192245B (en) Brain tumor segmentation network and method based on U-Net network
Hoogi et al. Adaptive estimation of active contour parameters using convolutional neural networks and texture analysis
CN110533683B (en) Radiomics analysis method fusing traditional features and deep features
Maulik Medical image segmentation using genetic algorithms
CN109598727A (en) Three-dimensional semantic segmentation method for pulmonary parenchyma in CT images based on a deep neural network
CN107133496B (en) Gene feature extraction method based on manifold learning and closed-loop deep convolution double-network model
Kumar et al. An overview of segmentation algorithms for the analysis of anomalies on medical images
CN107203988B (en) Method for reconstructing three-dimensional volumetric images from two-dimensional X-ray images and application thereof
CN109977955A (en) Method for identifying precancerous cervical lesions based on deep learning
CN110738662B (en) Pituitary tumor texture image grading method based on fine-grained medical image segmentation and truth value discovery data amplification
CN109871869B (en) Pulmonary nodule classification method and device
Wang et al. DBLCNN: Dependency-based lightweight convolutional neural network for multi-classification of breast histopathology images
Ge et al. Melanoma segmentation and classification in clinical images using deep learning
Sammouda Segmentation and analysis of CT chest images for early lung cancer detection
Mienye et al. Improved predictive sparse decomposition method with densenet for prediction of lung cancer
Alfifi et al. Enhanced artificial intelligence system for diagnosing and predicting breast cancer using deep learning
Li et al. Study on the detection of pulmonary nodules in CT images based on deep learning
Barrowclough et al. Binary segmentation of medical images using implicit spline representations and deep learning
Qayyum et al. Automatic segmentation using a hybrid dense network integrated with an 3D-atrous spatial pyramid pooling module for computed tomography (CT) imaging
CN109947960A (en) Method for building a multi-attribute joint estimation model for faces based on deep convolution
Manoj et al. Automated brain tumor malignancy detection via 3D MRI using adaptive-3-D U-Net and heuristic-based deep neural network
Abramson et al. Anatomically-informed deep learning on contrast-enhanced cardiac MRI for scar segmentation and clinical feature extraction
Shetty et al. Optimized deformable model-based segmentation and deep learning for lung cancer classification
Hussain et al. Neuro-fuzzy system for medical image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2018-08-31