CN107507189A - Mouse CT image kidney segmentation method based on random forest and statistical model - Google Patents
- Publication number
- CN107507189A (Application CN201710537961.XA / CN201710537961A)
- Authority
- CN
- China
- Prior art keywords
- kidney
- random forest
- organ
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30084—Kidney; Renal
Abstract
The invention belongs to the technical field of medical image processing and discloses a mouse CT image kidney segmentation method based on a random forest and a statistical model, comprising: establishing mean models of the high-contrast organs and the low-contrast organ from training samples; estimating the position of the kidney in the target image; extracting the features of the training samples and the target image; and training a random forest to complete the target segmentation. The invention constructs a feature representation tailored to CT images so that the random forest can segment them accurately, and addresses the problem that, faced with the large data volume of CT image sequences, a random forest alone is computationally complex and too slow. It also avoids the overfitting that a statistical model may introduce, since the model can be built from a small number of samples. The method segments the kidney in CT images accurately and quickly, without human intervention, and has important reference value in fields such as medical image segmentation.
Description
Technical field
The invention belongs to the technical field of medical image processing, and in particular relates to a mouse CT image kidney segmentation method based on a random forest and a statistical model.
Background technology
As a practical, highly interdisciplinary field attracting increasing attention, medical imaging technology is continually evolving, and a large number of imaging techniques have emerged in succession, such as computed tomography (Computed Tomography, CT), ultrasonography (Ultrasonography, US), and magnetic resonance imaging (Magnetic Resonance Imaging, MRI). Micro-CT (micro computed tomography), as an important medical imaging modality, is widely used in clinical research on small animals. Medical images describe the details of organs, tissues, structures, and lesions, providing important evidence for disease diagnosis, pathology localization, anatomical research, and surgical planning and guidance. Because of differences within biological tissue, organ motion, partial volume effects, and the limitations of imaging technology, medical images usually exhibit uneven intensity distributions, blurred and indistinct edges, noise, and artifacts. To give doctors more favorable conditions for quantitative analysis and thereby improve diagnostic efficiency, medical image segmentation has become an indispensable step, and accurate, fast segmentation of organs, especially soft tissues with relatively low contrast, is both particularly important and highly challenging.
Image segmentation usually exploits the discontinuity of features between different objects and the similarity of features within the same object. Traditional threshold- or region-based segmentation methods consider only low-level image information such as gray values and require clear edges or high contrast to segment the target accurately; CT images generally do not meet these conditions, since there is either no obvious gray-level difference between organs or the gray-value ranges of different tissues overlap to some extent, so methods such as thresholding and region growing struggle to segment the low-contrast soft-tissue organs in CT images. Deformation-model methods such as active shape models are limited in geometric flexibility by their preset parameters and internal-energy constraints, cannot change topology freely, and are sensitive to the initial position, requiring extensive manual initialization and other human interaction to ensure accurate segmentation of CT image sequences. Level-set methods treat the contour as the zero level set of an evolution function on a higher-dimensional surface, solving the topological flexibility problem of deformation models, but the implicit higher-dimensional surface representation makes it difficult to apply geometric or topological constraints on the level set, so such methods may not be robust enough for complex CT images, which limits their usability and degree of automation. In recent years, methods based on statistical models have been applied to more and more CT organ segmentation and target detection tasks; they require a large number of samples and labels as prior knowledge to build the model, may suffer from overfitting, and in practice there are often not many training samples available. Deep learning algorithms such as multi-layer convolutional neural networks achieve high accuracy for CT organ segmentation, but they require complex models, massive data, and time-consuming training.
In summary, the problem with the prior art is that current CT organ segmentation methods suffer from low accuracy and slow speed.
The content of the invention
To address the problems of the prior art, the invention provides a mouse CT image kidney segmentation method based on a random forest and a statistical model.
The invention is achieved as follows. A mouse CT image kidney segmentation method based on a random forest and a statistical model comprises the following steps:
Step 1: establish mean models of the high-contrast organs and the low-contrast organ from training samples. Use thresholding and manual segmentation to obtain the skin of each sample in the mouse CT training set. For each organ, select one training sample as the reference template and register all remaining training samples to it to obtain the mean model of that organ. Combine the skin, bone, and lung mean models into the high-contrast organ mean model, and take the kidney mean model as the low-contrast organ mean model.
Step 2: obtain the skin, bone, and lung of the target CT image by thresholding and manual segmentation to form the target image's high-contrast organs, and register the high-contrast organ mean model to them to obtain a transformation matrix. Because the relative positions of the organs in an organism are approximately consistent, apply the transformation matrix to the low-contrast organ mean model to obtain the target image's low-contrast organ, completing the estimation of the kidney position in the target image.
Step 3: take the minimal hexahedron containing the kidney in each training sample, and the minimal hexahedron containing the estimated kidney position in the target image, as the feature-extraction regions, and extract the features of the training samples and the target image. Extract one feature vector per voxel; each feature vector contains four kinds of features: gray-level statistical features, curvature features, texture features, and spatial context features.
Step 4: train a random forest with the training-sample features and corresponding labels, feed the target image's features into the trained random forest as input, and take the output labels as the kidney segmentation result for the target image.
Further, establishing the mean models of the high-contrast organs and the low-contrast organ from training samples in step 1 specifically includes:
(1) Choose the CT images of N mice as training samples, and obtain the skin, bone, lung, and kidney of each sample by thresholding and manual segmentation;
(2) Represent the skin, bone, lung, and kidney of each sample as point-cloud data obtained by mesh generation; register the point clouds of each organ with the iterative closest point algorithm, and compute the mean model of each organ across the N samples; denote the mean models of the skin, bone, lung, and kidney as M_skin, M_trunk, M_lung, and M_kidney respectively;
(3) Form the high-contrast organ mean model M_h = [M_skin, M_trunk, M_lung] and the low-contrast organ mean model M_l = M_kidney.
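The registration-and-averaging described above can be sketched in a few lines. This is an assumption-laden illustration, not the patent's implementation: it uses a minimal SVD-based rigid ICP with nearest-neighbor correspondences (the patent only names "iterative closest point"), and the point-wise averaging assumes the clouds share point count and ordering after registration.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Iterative closest point: returns src rigidly aligned to dst."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)      # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

def mean_model(clouds, ref_index=0):
    """Register every organ cloud to the reference sample, then average.
    Assumes equal point counts and consistent ordering (an assumption here)."""
    ref = clouds[ref_index]
    aligned = [icp(c, ref) for c in clouds]
    return np.mean(aligned, axis=0)
```

With N sample clouds for one organ, `mean_model(clouds, ref_index=2)` would mirror the embodiment's choice of the third mouse as the reference.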
Further, estimating the position of the kidney in the target image in step 2 specifically includes:
1) Obtain the high-contrast organs of the image to be segmented, including the skin O_skin, bone O_trunk, and lung O_lung, by thresholding or manual segmentation, and let the target image's high-contrast organs be O_h = [O_skin, O_trunk, O_lung];
2) Down-sample the high-contrast organ mean model M_h and the target image's high-contrast organs O_h to obtain M_hs and O_hs; the down-sampling method is one of random down-sampling, uniform grid filtering, or non-uniform grid filtering. Register the down-sampled point clouds with the iterative closest point algorithm, i.e., register M_hs to O_hs, to obtain the transformation matrix T_form;
3) Apply the transformation matrix T_form to the low-contrast organ mean model M_l to obtain O_l, which is the estimated kidney position in the target image.
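Sub-step 3) amounts to pushing the kidney mean model through the transform recovered in sub-step 2). A minimal sketch, assuming the transformation matrix is represented as a 4x4 homogeneous matrix (the patent does not fix a representation):

```python
import numpy as np

def apply_transform(points, Tform):
    """Apply an assumed 4x4 homogeneous transform Tform to an (n, 3) point
    cloud, e.g. mapping the kidney mean model M_l into the target image."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ Tform.T)[:, :3]
```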
Further, extracting the features of the training samples and the target image in step 3 specifically includes:
a) Choose k (1 ≤ k ≤ N) training samples; for the i-th (1 ≤ i ≤ k) sample S_i, the feature-extraction region is the minimal hexahedron R_i containing the kidney in that sample, and for the target image it is the minimal hexahedron M containing O_l;
b) For each voxel I in R_i and M, extract the n × n patch centered on I, compute its mean μ, variance σ, skewness η, and kurtosis λ, and let the patch's gray-level statistical feature be T_1 = [μ, σ, η, λ];
c) Divide the patch into m equal blocks, compute the curvature of each block, and compute the curvature entropy of each block as
ξ_r = -Σ_θ p_θ^r log p_θ^r,
where p_θ^r is the probability distribution of the mean curvature value θ in the r-th block; the patch's curvature feature T_2 collects the block curvature entropies;
d) Apply Gabor filtering to the patch (Ω directions, Γ scales) to obtain Ω × Γ texture maps. The Gabor filter bank is expressed as
G_{γ,ω}(x, y) = (a^γ / (2π σ_x σ_y)) · exp[-(1/2)(x'^2/σ_x^2 + y'^2/σ_y^2) + 2πj(Wx' + Vy')],
where
x' = a^γ (x cos(ωΨ) + y sin(ωΨ));
y' = a^γ (-x sin(ωΨ) + y cos(ωΨ));
Ω is the number of filter angles, ω = 0, ..., Ω-1, and the rotation factor Ψ = π/Ω; Γ is the number of filter scales, γ = 0, ..., Γ-1; U_h and U_l determine the frequency range of the Gabor filter bank; W and V are frequency offset parameters; the standard deviations σ_x and σ_y are the filter bandwidths in the x and y directions; and the scale factor a = (U_h/U_l)^(1/(Γ-1)).
Divide each texture map into m equal blocks and compute the texture entropy of each block as
ζ_r = -Σ_tex p_tex^r log p_tex^r,
where p_tex^r is the probability distribution of the texture value tex in the r-th block of the texture map; the patch's texture feature T_3 collects the block texture entropies;
e) At distances d_1, d_2, d_3, d_4 from I, take one voxel every 45° to obtain [X_1, ..., X_i, ..., X_32]; extract the 3 × 3 region centered on each X_i to obtain [Y_1, ..., Y_i, ..., Y_32]; compute each Y_i's gray mean, texture value, and curvature mean, and let these form the patch's spatial context feature T_4;
f) Construct the feature vector P = [T_1, T_2, T_3, T_4]; each feature vector corresponds to exactly one voxel;
g) Let the feature matrix of sample S_i be Q_i = [P_1; ...; P_α], where α is the number of voxels in R_i; the training-sample feature matrix is V = [Q_1; ...; Q_k], and the target-image feature matrix is W = [P_1; ...; P_β], where β is the number of voxels in M.
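The gray-level statistics of sub-step b) and the block entropies of sub-steps c) and d) can be sketched as follows. This is a hedged illustration: the patent does not specify how the per-block probability distributions are estimated, so a histogram (with an assumed bin count) stands in here.

```python
import numpy as np

def gray_stats(patch):
    """T1 = [mean, variance, skewness, kurtosis] of an n x n gray patch."""
    x = patch.astype(float).ravel()
    mu = x.mean()
    sigma2 = x.var()
    sd = np.sqrt(sigma2) + 1e-12          # avoid division by zero on flat patches
    eta = np.mean(((x - mu) / sd) ** 3)   # skewness
    lam = np.mean(((x - mu) / sd) ** 4)   # kurtosis
    return np.array([mu, sigma2, eta, lam])

def block_entropy(values, bins=16):
    """Shannon entropy -sum p log p over a histogram of block values;
    used for both the curvature entropy and the texture entropy."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                          # 0 log 0 := 0
    return float(-(p * np.log(p)).sum())
```

Calling `block_entropy` on each of the m blocks of the curvature map (or of each texture map) yields the entries of T_2 (or T_3).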
Further, training the random forest and completing the target segmentation in step 4 specifically includes:
(1) Use the training-sample feature matrix V and the label of the voxel corresponding to each feature vector in V as the input of the random forest, and optimize the random forest's parameters to train it;
(2) Input the target image's feature matrix W into the trained random forest to obtain the label of the voxel corresponding to each feature vector in W, which is the kidney segmentation result for the target image.
Another object of the present invention is to provide a micro computed tomography system using the described mouse CT image kidney segmentation method based on a random forest and a statistical model.
The advantages and positive effects of the present invention are as follows. First, it solves the problem that, faced with the large data volume of CT image sequences, a random forest alone is computationally complex and too slow: a random forest takes per-voxel feature vectors and corresponding labels (target or background) as input, so features must be extracted for every voxel handed to the forest for prediction; after localization with the statistical model, the number of voxels to predict is greatly reduced while accuracy is preserved, so compared with segmentation using a random forest alone, the proposed method greatly improves the forest's segmentation speed. Second, the model can be built from a small number of samples, avoiding the overfitting a statistical model may bring: to make the prior knowledge richer, statistical models are usually built from a large number of samples, which risks overfitting, and for medical data there are often not many training samples in practice; the proposed method can build the model from a few samples, resolving this problem well. Third, a random forest needs an explicit feature representation, and target segmentation in CT images is difficult if only low-level information such as gray values is used; the proposed method constructs a feature representation tailored to CT images, enabling the random forest to segment them more accurately. Fourth, the proposed method segments the kidney in CT images accurately and quickly, without human intervention. Its Dice coefficient reaches 0.97 and its average surface distance 0.4 mm, whereas for existing techniques, including other segmentation methods based on random forests or statistical models, the best Dice coefficient is about 0.93 and the smallest average surface distance about 0.8 mm; the proposed method therefore improves segmentation accuracy to a certain extent. Fifth, the proposed mouse CT image kidney segmentation method based on a random forest and a statistical model can also be used to segment various soft-tissue organs in other medical images.
Brief description of the drawings
Fig. 1 is a flow chart of the mouse CT image kidney segmentation method based on a random forest and a statistical model provided by an embodiment of the present invention.
Fig. 2 is a schematic diagram of the registration results of each organ in the training samples provided by an embodiment of the present invention.
Fig. 3 is a flow chart of estimating the kidney position in the target image provided by an embodiment of the present invention.
Fig. 4 is a flow chart of the feature extraction process provided by an embodiment of the present invention.
Embodiment
To make the purpose, technical scheme, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The application principle of the present invention is explained in detail below in conjunction with the accompanying drawings.
S101: establish mean models of the high-contrast organs and the low-contrast organ from training samples;
S102: estimate the position of the kidney in the target image;
S103: extract the features of the training samples and the target image;
S104: train the random forest and complete the target segmentation.
The application principle of the present invention is further described below in conjunction with the accompanying drawings.
As shown in Fig. 1, the mouse CT image kidney segmentation method based on a random forest and a statistical model provided by an embodiment of the present invention comprises the following steps:
(1) Establish mean models of the high-contrast organs and the low-contrast organ from training samples.
(1a) The training samples used in this example are the CT images of 4 mice; the skin, bone, lung, and kidney of each sample are obtained by segmentation with the Amira software;
(1b) Mesh generation is performed on each organ so that it is represented by point-cloud data; the point clouds of each organ are registered with the iterative closest point (ICP) algorithm, and the mean models of the skin, bone, lung, and kidney over the 4 samples are computed and denoted M_skin, M_trunk, M_lung, M_kidney. Fig. 2 compares the organs in the training samples before and after registration; the organs of the third mouse are selected as the reference to which the organs of the other mice are registered;
(1c) Let the high-contrast organ mean model be M_h = [M_skin, M_trunk, M_lung] and the low-contrast organ mean model be M_l = M_kidney.
(2) Estimate the position of the kidney in the target image; referring to Fig. 3, the detailed process is as follows:
(2a) Obtain the point clouds of the target image's high-contrast organs, including the skin O_skin, bone O_trunk, and lung O_lung, with the Amira software, and let the target image's high-contrast organs be O_h = [O_skin, O_trunk, O_lung];
(2b) Down-sample the high-contrast organ mean model M_h and the target image's high-contrast organs O_h to obtain M_hs and O_hs; the down-sampling method used here is non-uniform grid filtering. Register the down-sampled point clouds with the iterative closest point algorithm, i.e., register M_hs to O_hs, to obtain the transformation matrix T_form; the registration method used in this example is ICP;
(2c) Apply the transformation matrix T_form to the low-contrast organ mean model M_l to obtain O_l, the estimated kidney position in the target image.
(3) Fig. 4 shows the feature-extraction flow for an arbitrary voxel; the complete steps for extracting the features of the training samples and the target image are as follows:
(3a) Choose k (1 ≤ k ≤ N) training samples; for the i-th (1 ≤ i ≤ k) sample S_i the feature-extraction region is the minimal hexahedron R_i containing the kidney in that sample (k = 1 in this example), and for the target image it is the minimal hexahedron M containing O_l;
(3b) For each voxel I in R_i and M, extract the n × n patch centered on I, compute its mean μ, variance σ, skewness η, and kurtosis λ, and let the patch's gray-level statistical feature be T_1 = [μ, σ, η, λ]; n = 31 in this example;
(3c) Divide the patch into m equal blocks, compute the curvature of each block, and compute the curvature entropy of each block as
ξ_r = -Σ_θ p_θ^r log p_θ^r,
where p_θ^r is the probability distribution of the mean curvature value θ in the r-th block. The patch's curvature feature T_2 collects the block curvature entropies; m = 9;
(3d) Apply Gabor filtering (Ω directions, Γ scales) to the patch to obtain Ω × Γ texture maps. The Gabor filter bank can be expressed as
G_{γ,ω}(x, y) = (a^γ / (2π σ_x σ_y)) · exp[-(1/2)(x'^2/σ_x^2 + y'^2/σ_y^2) + 2πj(Wx' + Vy')],
where
x' = a^γ (x cos(ωΨ) + y sin(ωΨ));
y' = a^γ (-x sin(ωΨ) + y cos(ωΨ));
Ω = 4 is the number of filter angles (0°, 45°, 90°, 135°), ω = 0, ..., Ω-1, and the rotation factor Ψ = π/Ω; Γ = 2 is the number of filter scales (1, 0.5), γ = 0, ..., Γ-1; U_h and U_l determine the frequency range of the Gabor filter bank, and W, V are frequency offset parameters, set as:
U_h = 0.1, U_l = 0.025, W = V = U_h, σ_x = 2, σ_y = 4, scale factor a = (U_h/U_l)^(1/(Γ-1)).
Divide each texture map into m equal blocks and compute the texture entropy of each block as
ζ_r = -Σ_tex p_tex^r log p_tex^r,
where p_tex^r is the probability distribution of the texture value tex in the r-th block of the texture map. The patch's texture feature T_3 collects the block texture entropies;
(3e) As shown by the circles in Fig. 4, at distances d_1, d_2, d_3, d_4 from I, take one voxel every 45° to obtain [X_1, ..., X_i, ..., X_32]; in this example [d_1, d_2, d_3, d_4] = [3, 8, 15, 22]. Extract the 3 × 3 region centered on each X_i to obtain [Y_1, ..., Y_i, ..., Y_32], and compute each Y_i's gray mean, texture value (i.e., the mean of the texture map obtained with the Gabor filter at direction 90° and scale 1), and curvature mean; let these form the patch's spatial context feature T_4;
(3f) Construct the feature vector P = [T_1, T_2, T_3, T_4]; each feature vector corresponds to exactly one voxel;
(3g) Let the feature matrix of sample S_i be Q_i = [P_1; ...; P_α], where α is the number of voxels in R_i; the final training-sample feature matrix is V = [Q_1; ...; Q_k], and the target-image feature matrix is W = [P_1; ...; P_β], where β is the number of voxels in M.
(4) Train the random forest and complete the target segmentation:
(4a) Use the training-sample feature matrix V and the label (target or background) of the voxel corresponding to each feature vector in V as the input of the random forest, and optimize the random forest's parameters to train it;
(4b) Input the target image's feature matrix W into the trained random forest to obtain the label of the voxel corresponding to each feature vector in W, i.e., the kidney segmentation result for the target image.
The application effect of the present invention is explained in detail below with reference to a concrete embodiment.
The segmentation result of the kidney almost completely overlaps the ground truth. The Dice coefficient is defined as
Dice = 2 |R_r ∩ R_s| / (|R_r| + |R_s|),
where R_r is the region of the true kidney, R_s is the region of the segmentation result in this example, ∩ denotes the intersection of the two regions, and |·| denotes the number of voxels contained in a region. The Dice coefficient lies between 0 and 1, and the closer to 1 the better the segmentation. Here the Dice coefficient equals 0.97, and the average surface distance between the two is 0.4 mm.
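The Dice coefficient used in this evaluation is straightforward to compute on binary voxel masks; a minimal sketch:

```python
import numpy as np

def dice(seg, truth):
    """Dice = 2 |Rr ∩ Rs| / (|Rr| + |Rs|) for boolean voxel masks."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())
```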
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (6)
1. A mouse CT image kidney segmentation method based on a random forest and a statistical model, characterised in that the method comprises the following steps:
Step 1: establish mean models of the high-contrast organs and the low-contrast organ from training samples; use thresholding and manual segmentation to obtain the skin of each sample in the mouse CT training set; for each organ, select one training sample as the reference template and register all remaining training samples to it to obtain the organ's mean model; combine the skin, bone, and lung mean models into the high-contrast organ mean model, and take the kidney mean model as the low-contrast organ mean model;
Step 2: obtain the skin, bone, and lung of the target CT image by thresholding and manual segmentation to form the target image's high-contrast organs, and register the high-contrast organ mean model to them to obtain a transformation matrix; because the relative positions of the organs in an organism are approximately consistent, apply the transformation matrix to the low-contrast organ mean model to obtain the target image's low-contrast organ, completing the estimation of the kidney position in the target image;
Step 3: take the minimal hexahedron containing the kidney in each training sample, and the minimal hexahedron containing the estimated kidney position in the target image, as the feature-extraction regions, and extract the features of the training samples and the target image; extract one feature vector per voxel, each containing four kinds of features: gray-level statistical features, curvature features, texture features, and spatial context features;
Step 4: train a random forest with the training-sample features and corresponding labels, use the target image's features as the input of the trained random forest, and take the output labels as the kidney segmentation result for the target image.
2. The mouse CT image kidney segmentation method based on a random forest and a statistical model as claimed in claim 1, characterised in that establishing the mean models of the high-contrast organs and the low-contrast organ from training samples in step 1 specifically includes:
(1) choose the CT images of N mice as training samples, and obtain the skin, bone, lung, and kidney of each sample by thresholding and manual segmentation;
(2) represent the skin, bone, lung, and kidney of each sample as point-cloud data obtained by mesh generation; register the point clouds of each organ with the iterative closest point algorithm, and compute the mean model of each organ across the N samples; denote the mean models of the skin, bone, lung, and kidney as M_skin, M_trunk, M_lung, and M_kidney respectively;
(3) form the high-contrast organ mean model M_h = [M_skin, M_trunk, M_lung] and the low-contrast organ mean model M_l = M_kidney.
3. The mouse CT image kidney segmentation method based on a random forest and a statistical model as claimed in claim 1, characterised in that estimating the position of the kidney in the target image in step 2 specifically includes:
1) obtain the high-contrast organs of the image to be segmented, including the skin O_skin, bone O_trunk, and lung O_lung, by thresholding or manual segmentation, and let the target image's high-contrast organs be O_h = [O_skin, O_trunk, O_lung];
2) down-sample the high-contrast organ mean model M_h and the target image's high-contrast organs O_h to obtain M_hs and O_hs, where the down-sampling method is one of random down-sampling, uniform grid filtering, or non-uniform grid filtering; register the down-sampled point clouds with the iterative closest point algorithm, i.e., register M_hs to O_hs, to obtain the transformation matrix T_form;
3) apply the transformation matrix T_form to the low-contrast organ mean model M_l to obtain O_l, the estimated kidney position in the target image.
4. the mouse CT image kidney dividing methods based on random forest and statistical model as claimed in claim 1, its feature
It is, the feature of extraction training sample and target image specifically includes in the step 3:
A) k (1≤k≤N) individual training sample is chosen, to i-th (1≤i≤k) individual sample SiThe region for carrying out feature extraction is the sample
The minimum hexahedron R of kidney is included in thisi, the region that feature extraction is carried out to target image is to include OlMinimum hexahedron M;
B) for RiWith each voxel I in M, the patch of n × n centered on I is extracted, calculates its mean μ, variances sigma, partially
η and kurtosis λ is spent, makes patch gray-scale statistical characteristics T1=[μ, σ, η, λ];
C) it is equal m blocks to divide patch, calculates each piece of curvature, and is calculated as follows each piece of curvature entropy
<mrow>
<msubsup>
<mi>&xi;</mi>
<mrow>
<mi>a</mi>
<mi>n</mi>
<mi>i</mi>
</mrow>
<mi>r</mi>
</msubsup>
<mo>=</mo>
<mo>-</mo>
<munder>
<mo>&Sigma;</mo>
<mi>&theta;</mi>
</munder>
<msubsup>
<mi>p</mi>
<mi>&theta;</mi>
<mi>r</mi>
</msubsup>
<mi>log</mi>
<mi> </mi>
<msubsup>
<mi>p</mi>
<mi>&theta;</mi>
<mi>r</mi>
</msubsup>
<mo>;</mo>
</mrow>
WhereinR block image mean curvature values θ probability distribution is represented, makes patch curvature feature
D) Gabor is carried out to patch to filter to obtain Ω × Γ texture maps, Ω direction, Γ yardstick;Gabor filter group
It is expressed as:
<mrow>
<msub>
<mi>G</mi>
<mrow>
<mi>&gamma;</mi>
<mo>,</mo>
<mi>&omega;</mi>
</mrow>
</msub>
<mrow>
<mo>(</mo>
<mi>x</mi>
<mo>,</mo>
<mi>y</mi>
<mo>)</mo>
</mrow>
<mo>=</mo>
<mrow>
<mo>(</mo>
<mfrac>
<msup>
<mi>a</mi>
<mi>&gamma;</mi>
</msup>
<mrow>
<mn>2</mn>
<msub>
<mi>&pi;&sigma;</mi>
<mi>x</mi>
</msub>
<msub>
<mi>&sigma;</mi>
<mi>y</mi>
</msub>
</mrow>
</mfrac>
<mo>)</mo>
</mrow>
<mi>exp</mi>
<mo>&lsqb;</mo>
<mo>-</mo>
<mfrac>
<mn>1</mn>
<mn>2</mn>
</mfrac>
<mrow>
<mo>(</mo>
<mfrac>
<msup>
<mi>x</mi>
<mrow>
<mo>&prime;</mo>
<mn>2</mn>
</mrow>
</msup>
<mrow>
<msup>
<msub>
<mi>&sigma;</mi>
<mi>x</mi>
</msub>
<mn>2</mn>
</msup>
</mrow>
</mfrac>
<mo>+</mo>
<mfrac>
<msup>
<mi>y</mi>
<mrow>
<mo>&prime;</mo>
<mn>2</mn>
</mrow>
</msup>
<mrow>
<msup>
<msub>
<mi>&sigma;</mi>
<mi>y</mi>
</msub>
<mn>2</mn>
</msup>
</mrow>
</mfrac>
<mo>)</mo>
</mrow>
<mo>+</mo>
<mn>2</mn>
<mi>&pi;</mi>
<mi>j</mi>
<mrow>
<mo>(</mo>
<msup>
<mi>Wx</mi>
<mo>&prime;</mo>
</msup>
<mo>+</mo>
<msup>
<mi>Vy</mi>
<mo>&prime;</mo>
</msup>
<mo>)</mo>
</mrow>
<mo>&rsqb;</mo>
<mo>;</mo>
</mrow>
Wherein:
X '=aγ(xcos(ωΨ)+ysin(ωΨ));
Y '=aγ(-xsin(ωΨ)+ycos(ωΨ));
Ω be filter angle number, ω=0 ..., Ω -1, twiddle factor Ψ=π/Ω;Γ is the number of filter scale, γ=
0 ..., Γ -1, UhAnd UlDetermine the frequency range of Gabor filter group, W, V are frequency offset parameters, standard deviation sigmaxAnd σyPoint
Not Biao Shi wave filter in the bandwidth in x directions and y directions, scale factor a=(Uh/Ul)1/Γ-1;
Each texture maps are divided into equal m blocks, are calculated as follows each piece of texture entropy
$$\zeta_{ani}^{r}=-\sum_{tex}p_{tex}^{r}\log p_{tex}^{r};$$
where p_tex^r denotes the probability distribution of the texture value tex in the r-th block of the texture map; these entropies form the texture feature of the patch;
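The block-wise texture entropy above has a direct numerical counterpart. This is a minimal sketch: the function name, the row-band splitting scheme, and the assumption that the texture map is quantized to integer values are illustrative choices, not specified by the claim:

```python
import numpy as np

def block_texture_entropy(texture_map, m):
    """Compute zeta^r = -sum_tex p_tex^r log p_tex^r for each of m blocks.

    Splits the quantized texture map into m roughly equal row bands
    (an assumed partitioning) and returns one entropy per block.
    """
    entropies = []
    for block in np.array_split(texture_map, m, axis=0):
        values, counts = np.unique(block, return_counts=True)
        p = counts / counts.sum()            # empirical p_tex^r per texture value
        entropies.append(float(-(p * np.log(p)).sum()))
    return entropies
```

A block with a single texture value has entropy 0, and a block split evenly between two values has entropy log 2, matching the usual Shannon-entropy behavior.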
E) at distances d1, d2, d3 and d4 from I, take one voxel every 45°, yielding [X1, …, Xi, …, X32]; extract the 3 × 3 region centered on each Xi, yielding [Y1, …, Yi, …, Y32]; compute the gray-level mean, texture value and mean-curvature average of each Yi as the spatial-context feature of the patch;
F) construct the feature vector P = [T1, T2, T3, T4]; each feature vector corresponds to exactly one voxel;
G) form the feature matrix Qi = [P1; …; Pα] of each sample Si, where α is the number of voxels contained in Ri; the feature matrix of the training samples is V = [Q1; …; Qk], and the feature matrix of the target image is W = [P1; …; Pβ], where β is the number of voxels contained in M.
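The circular sampling pattern of step E) can be sketched in 2D as follows; the function name, the origin-centered example, and the unit radii are illustrative assumptions:

```python
import numpy as np

def context_points(center, distances):
    """Take one sample point every 45 degrees at each given distance
    from `center`: 8 angles x len(distances) radii, so 4 radii give
    the 32 points [X1, ..., X32] of step E)."""
    cx, cy = center
    points = []
    for d in distances:
        for k in range(8):                   # 0, 45, 90, ..., 315 degrees
            ang = np.deg2rad(45 * k)
            points.append((cx + d * np.cos(ang), cy + d * np.sin(ang)))
    return points

X = context_points((0.0, 0.0), [1, 2, 3, 4])  # 32 context sample locations
```

In the method itself, each sampled location would be rounded to a voxel index and its 3 × 3 neighborhood summarized by the gray, texture, and curvature statistics described above.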
5. The mouse CT image kidney segmentation method based on random forest and statistical model as claimed in claim 1, characterized in that training the random forest and completing the target segmentation in step 4 specifically comprises:
(1) using the feature matrix V of the training samples, and the label of the voxel corresponding to each feature vector in V, as the input of the random forest, and optimizing the parameters of the random forest to train it;
(2) inputting the feature matrix W of the target image into the trained random forest to obtain the label of the voxel corresponding to each feature vector in W, which is the segmentation result of the kidney in the target image.
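The two steps of claim 5 map directly onto a standard random-forest classifier. The sketch below uses scikit-learn with synthetic stand-ins for V, the voxel labels, and W; all sizes, parameter values, and the label rule are assumptions, not the patent's actual data or tuned parameters:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins: one feature vector P per row, as in the claim's
# matrices V (training) and W (target image).
rng = np.random.default_rng(0)
V = rng.normal(size=(200, 8))                # training feature matrix
labels = (V[:, 0] > 0).astype(int)           # 1 = kidney voxel, 0 = background
W = rng.normal(size=(50, 8))                 # target-image feature matrix

# Step (1): choose/optimize forest parameters and fit on (V, labels).
forest = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
forest.fit(V, labels)

# Step (2): predict one label per target voxel; the voxels labeled 1
# constitute the kidney segmentation.
pred = forest.predict(W)
```

In practice the forest's hyperparameters (tree count, depth, features per split) would be tuned by cross-validation on the training samples before segmenting the target image.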
6. A micro computed tomography system using the mouse CT image kidney segmentation method based on random forest and statistical model according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710537961.XA CN107507189A (en) | 2017-07-04 | 2017-07-04 | Mouse CT image kidney dividing methods based on random forest and statistical model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107507189A true CN107507189A (en) | 2017-12-22 |
Family
ID=60678994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710537961.XA Pending CN107507189A (en) | 2017-07-04 | 2017-07-04 | Mouse CT image kidney dividing methods based on random forest and statistical model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107507189A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510507A (en) * | 2018-03-27 | 2018-09-07 | 哈尔滨理工大学 | A kind of 3D vertebra CT image active profile dividing methods of diffusion-weighted random forest |
CN109166133A (en) * | 2018-07-14 | 2019-01-08 | 西北大学 | Soft tissue organs image partition method based on critical point detection and deep learning |
CN111476292A (en) * | 2020-04-03 | 2020-07-31 | 北京全景德康医学影像诊断中心有限公司 | Small sample element learning training method for medical image classification processing artificial intelligence |
CN112258499A (en) * | 2020-11-10 | 2021-01-22 | 北京深睿博联科技有限责任公司 | Lymph node partition method, apparatus, device and computer readable storage medium |
WO2021109987A1 (en) * | 2019-12-06 | 2021-06-10 | 广州柏视医疗科技有限公司 | Electronic apparatus and non-transient computer-readable storage medium |
CN113139948A (en) * | 2021-04-28 | 2021-07-20 | 福建自贸试验区厦门片区Manteia数据科技有限公司 | Organ contour line quality evaluation method, device and system |
CN113436211A (en) * | 2021-08-03 | 2021-09-24 | 天津大学 | Medical image active contour segmentation method based on deep learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160045180A1 (en) * | 2014-08-18 | 2016-02-18 | Michael Kelm | Computer-Aided Analysis of Medical Images |
CN105719278A (en) * | 2016-01-13 | 2016-06-29 | 西北大学 | Organ auxiliary positioning segmentation method based on statistical deformation model |
CN106296653A (en) * | 2016-07-25 | 2017-01-04 | 浙江大学 | Brain CT image hemorrhagic areas dividing method based on semi-supervised learning and system |
CN106485695A (en) * | 2016-09-21 | 2017-03-08 | 西北大学 | Medical image Graph Cut dividing method based on statistical shape model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107507189A (en) | Mouse CT image kidney dividing methods based on random forest and statistical model | |
US11776120B2 (en) | Method for predicting morphological changes of liver tumor after ablation based on deep learning | |
CN109166133B (en) | Soft tissue organ image segmentation method based on key point detection and deep learning | |
CN102890823B (en) | Motion object outline is extracted and left ventricle image partition method and device | |
CN107590809A (en) | Lung dividing method and medical image system | |
CN106991694B (en) | Based on marking area area matched heart CT and ultrasound image registration method | |
Zhang et al. | Review of breast cancer pathologigcal image processing | |
CN106934821A (en) | A kind of conical beam CT and CT method for registering images based on ICP algorithm and B-spline | |
CN102871686A (en) | Device and method for determining physiological parameters based on 3D (three-dimensional) medical images | |
CN105913432A (en) | Aorta extracting method and aorta extracting device based on CT sequence image | |
CN103295234B (en) | Based on the medical image segmentation system and method for deformation surface model | |
Heyde et al. | Anatomical image registration using volume conservation to assess cardiac deformation from 3D ultrasound recordings | |
Tan et al. | An approach for pulmonary vascular extraction from chest CT images | |
CN113570627A (en) | Training method of deep learning segmentation network and medical image segmentation method | |
CN115830016B (en) | Medical image registration model training method and equipment | |
CN112263217A (en) | Non-melanoma skin cancer pathological image lesion area detection method based on improved convolutional neural network | |
Ramasamy et al. | Machine learning in cyber physical systems for healthcare: brain tumor classification from MRI using transfer learning framework | |
Yin et al. | Automatic breast tissue segmentation in MRIs with morphology snake and deep denoiser training via extended Stein’s unbiased risk estimator | |
CN113222979A (en) | Multi-map-based automatic skull base foramen ovale segmentation method | |
CN104915989A (en) | CT image-based blood vessel three-dimensional segmentation method | |
CN108596900B (en) | Thyroid-associated ophthalmopathy medical image data processing device and method, computer-readable storage medium and terminal equipment | |
Smith et al. | Automated torso contour extraction from clinical cardiac MR slices for 3D torso reconstruction | |
CN111986216B (en) | RSG liver CT image interactive segmentation algorithm based on neural network improvement | |
Xu et al. | A pilot study to utilize a deep convolutional network to segment lungs with complex opacities | |
Lu et al. | Three-dimensional multimodal image non-rigid registration and fusion in a high intensity focused ultrasound system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171222 |