CN109344833A - Medical image cutting method, segmenting system and computer readable storage medium - Google Patents
- Publication number
- CN109344833A CN109344833A CN201811024532.3A CN201811024532A CN109344833A CN 109344833 A CN109344833 A CN 109344833A CN 201811024532 A CN201811024532 A CN 201811024532A CN 109344833 A CN109344833 A CN 109344833A
- Authority
- CN
- China
- Prior art keywords
- image
- network
- pixel
- tag
- loss function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The disclosure provides a medical image segmentation method, a segmentation system, and a computer-readable storage medium. The method includes the following steps: building a network model and training it to obtain a trained network model; using the trained model to predict image-level labels and pixel-level labels; and finally achieving accurate segmentation of the image using the predicted image-level labels and pixel-level labels together with the feature mapping of the reconstructed image. The technical solution provided by the present application has the advantage of accurate segmentation.
Description
Technical field
The present invention relates to the fields of computer technology and medical technology, and in particular to a medical image segmentation method, a segmentation system, and a computer-readable storage medium.
Background technique
In medical practice, accurate medical image segmentation helps doctors diagnose and treat patients more effectively. Deep-learning-based medical image segmentation models depend heavily on large amounts of pixel-level annotated data. However, pixel-level annotated images are difficult to obtain, which severely limits the segmentation accuracy and generalization ability of such models. In practice, the available training data consist of a small number of pixel-level annotated samples and a large number of low-cost image-level annotated samples. Image-level annotated samples lack any description of pixel-level information: only image-level features can be extracted from them, and pixel-level features are difficult to extract with traditional supervised learning, so a trained model struggles to achieve pixel-level segmentation of an image.
Summary of the invention
Embodiments of the present invention provide a medical image segmentation method, a segmentation system, and a computer-readable storage medium capable of pixel-level segmentation of images.
In a first aspect, an embodiment of the present invention provides a medical image segmentation method, the method including the following steps:
building a network model, and training the network model to obtain a trained network model;
using the trained model to predict image-level labels and pixel-level labels, and finally achieving accurate segmentation of the image using the predicted image-level labels and pixel-level labels together with the feature mapping of the reconstructed image.
In a second aspect, a medical image segmentation system is provided, the system including:
a building module for building a network model and training it to obtain a trained network model;
a prediction module for predicting image-level labels and pixel-level labels using the trained model;
a segmentation module for achieving accurate segmentation of the image using the predicted image-level labels and pixel-level labels and the feature mapping of the reconstructed image.
In a third aspect, a computer-readable storage medium is provided, storing a program for electronic data interchange, wherein the program causes a terminal to execute the method provided in the first aspect.
Compared with the prior art, the proposed scheme constructs an image reconstruction module and extracts image reconstruction features, which compensate for the insufficiency of image-level label information and reduce the influence of pixel-level label uncertainty. The present invention addresses the problem that conventional image segmentation methods ignore the individual characteristics of images, and proposes a personalized learning framework that combines common features with individual features, improving pixel-level segmentation accuracy. In the test phase, the proposed scheme continues to use the image reconstruction model for personalized learning and analyzes its influence on segmentation accuracy, achieving personalized segmentation for the image data. The network disclosed in this patent mines the mapping relationship between image-level labels and pixel-level labels; by fusing image-level features with other auxiliary features, pixel-level segmentation of the target image can be solved with high accuracy.
Detailed description of the invention
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a technical flowchart of a medical image segmentation method.
Fig. 1a is a schematic flowchart of a medical image segmentation method provided by the present application.
Fig. 1b is a schematic structural diagram of a medical image segmentation system provided by the present application.
Fig. 2 shows the structure of the medical image segmentation network based on multi-task learning.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", etc. in the specification, claims, and drawings are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present invention. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment that is mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Medical images are complex and diverse, and segmentation boundaries vary widely. Achieving accurate segmentation of regions in medical images requires training the segmentation model on a large amount of high-quality medical image data with pixel-level labels. However, most of the images archived in hospitals carry only image-level labels, or no labels at all. High-quality pixel-level labels for medical images must be annotated by hand by practitioners, which is time-consuming, laborious, and very expensive, making them difficult to obtain at scale. In order to make full use of the large amount of medical image data with image-level labels and improve segmentation accuracy, we propose a medical image segmentation scheme based on paired-associate learning.
At present, magnetic resonance imaging is among the most common examination methods in hospitals. In particular, diffusion tensor imaging (DTI) is a recent technique that provides multiple signals for quantitative evaluation of spinal cord injury and reflects changes in the spinal cord microstructure; compared with conventional T2WI, it has higher sensitivity. Medical image diagnosis relies on means such as detecting whether the signal in a region of interest (ROI) is abnormal or has changed, so the precision of region-of-interest segmentation directly affects diagnosis. To improve diagnostic effectiveness, this scheme proposes a novel medical image segmentation algorithm for segmenting regions of interest. The model requires only a small number of images with pixel-level labels and a large amount of data with image-level labels to complete training. The scheme reduces the expensive pixel-level image annotation work; meanwhile, the descriptions doctors produce during diagnosis can be extracted as image-level labels, so image data with image-level labels can be obtained at scale.
Referring to Fig. 1a, a medical image segmentation method is provided; referring to Fig. 1a, the method includes the following steps:
Step S101: build a network model, and train the network model to obtain a trained network model.
It should be noted that the network model in step S101 includes a basic network A and a basic network B; the basic network A includes a deep residual network and a densely connected convolutional neural network, and the basic network B includes a capsule network structure model.
Step S102: use the trained model to predict image-level labels and pixel-level labels, and finally achieve accurate segmentation of the image using the predicted image-level labels and pixel-level labels and the feature mapping of the reconstructed image.
The proposed scheme constructs an image reconstruction module and extracts image reconstruction features, which compensate for the insufficiency of image-level label information and reduce the influence of pixel-level label uncertainty. The present invention addresses the problem that conventional image segmentation methods ignore the individual characteristics of images, and proposes a personalized learning framework that combines common features with individual features, improving pixel-level segmentation accuracy. In the test phase, the proposed scheme continues to use the image reconstruction model for personalized learning and analyzes its influence on segmentation accuracy, achieving personalized segmentation for the image data. The network disclosed in this patent mines the mapping relationship between image-level labels and pixel-level labels; by fusing image-level features with other auxiliary features, pixel-level segmentation of the target image can be solved with high accuracy.
Optionally, training the network model in step S101 to obtain the trained network model specifically includes:
using the high-quality pixel-level annotated data {I_f, L_f, T_f} to adjust the parameters of the basic network A, the pixel-label prediction network B, the image-tag prediction network C, and the image reconstruction network D by minimizing loss functions; the loss functions include the loss function Loss_map(B_o, L_f) for pixel-label prediction, the loss function Loss_tag(C_o, T_f) for image tags, and the Euclidean loss function Loss_img(D_o, I_f) for image reconstruction, where B_o, C_o, and D_o respectively denote the predicted pixel-level labels, the predicted image-level labels, and the reconstructed image;
using the more easily obtained image-level annotated data {I_w, T_w} to further train the model by minimizing the image-tag loss function and the image-reconstruction loss function; after this training is complete, predicting the pixel-level labels L_w by integrating the image-level label information; and, after obtaining the pixel-level label information of the weakly annotated data, taking the predicted L_w as ground truth, converting the weakly annotated image data into fully annotated image data, and performing supervised training of the overall network model on all the image data to obtain the trained network model.
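The three training phases described above (supervised pre-training on the small fully annotated set, semi-supervised fine-tuning with the weakly annotated images, and conversion of weak annotations into full ones via the predicted L_w) can be sketched as a plain-Python pipeline. The helper names (`train_step`, `predict_pixel_labels`) and the tuple layout of the data are illustrative assumptions, not the patent's API:

```python
# Hypothetical sketch of the three-stage training pipeline; the real
# networks and losses are replaced by the injected helper callables.

def train_pipeline(full_data, weak_data, train_step, predict_pixel_labels):
    """full_data: list of (image, pixel_label, image_tag) triples;
    weak_data: list of (image, image_tag) pairs with no pixel labels."""
    # Stage 1: supervised pre-training on the few fully annotated images.
    model = train_step(None, full_data)
    # Stage 2: semi-supervised fine-tuning; weak images contribute only the
    # image-tag and reconstruction losses (pixel label is None).
    model = train_step(model, full_data + [(img, None, tag) for img, tag in weak_data])
    # Stage 3: predict pixel labels L_w for the weak images, treat them as
    # ground truth, and retrain on the enlarged fully annotated set.
    converted = [(img, predict_pixel_labels(model, img), tag) for img, tag in weak_data]
    model = train_step(model, full_data + converted)
    return model, converted
```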
Optionally, the above supervised training includes:
following the multi-step training method of multi-task learning so that the network parameters change smoothly, where the loss functions include the softmax loss function Loss_map(A_o, L_f) for pixel-label prediction, the softmax loss function Loss_tag(B_o, T_f) for image tags, and the Euclidean loss function Loss_img(C_o, I_f) for image reconstruction;
in the semi-supervised learning stage, fine-tuning the model using all the fully and weakly labeled image data; all the image data can be denoted {I, L, T}, where I = {I_f, I_w}, L = {L_f, L_w}, and T = {T_f, T_w}, and the optimization problem of the whole network can then be converted into:
argmin_θ { Loss_map(B_o, L) + Loss_tag(C_o, T) + Loss_img(D_o, I) }
where θ denotes the parameters of the whole network, and α_1, β_1 respectively denote the parameters of the output layer of the DPN and of the hidden layers A1 and B1.
Referring to Fig. 1b, a medical image segmentation system is provided, the system including:
a building module 201 for building a network model and training it to obtain a trained network model;
a prediction module 202 for predicting image-level labels and pixel-level labels using the trained model;
a segmentation module 203 for achieving accurate segmentation of the image using the predicted image-level labels and pixel-level labels and the feature mapping of the reconstructed image.
Optionally, the building module 201 is specifically configured to: use the high-quality pixel-level annotated data {I_f, L_f, T_f} to adjust the parameters of the basic network A, the pixel-label prediction network B, the image-tag prediction network C, and the image reconstruction network D by minimizing loss functions, the loss functions including the loss function Loss_map(B_o, L_f) for pixel-label prediction, the loss function Loss_tag(C_o, T_f) for image tags, and the Euclidean loss function Loss_img(D_o, I_f) for image reconstruction, where B_o, C_o, and D_o respectively denote the predicted pixel-level labels, the predicted image-level labels, and the reconstructed image; use the more easily obtained image-level annotated data {I_w, T_w} to further train the model by minimizing the image-tag loss function and the image-reconstruction loss function; after this training is complete, predict the pixel-level labels L_w by integrating the image-level label information; and, after obtaining the pixel-level label information of the weakly annotated data, take the predicted L_w as ground truth, convert the weakly annotated image data into fully annotated image data, and perform supervised training of the overall network model on all the image data to obtain the trained network model.
Deep convolutional neural networks have made remarkable progress in image tasks; the introduction of the deep residual network (ResNet) and the densely connected convolutional network (DenseNet) has further expanded the application performance of deep neural networks. The dual path network (DPN) combines the advantages of ResNet and DenseNet by fusing them as two channel networks, achieving both the reuse of existing features and the exploration of new features. Its basic transformation can be expressed as:
x_k = Σ_{t=1}^{k-1} f_t^k(h_t),  (1)
y_k = Σ_{t=1}^{k-1} v_t(h_t),  (2)
r_k = x_k + y_k,  (3)
h_k = g_k(r_k),  (4)
where x_k and y_k respectively denote the information extracted by each channel at step k, and f_t^k(·) and v_t(·) are feature learning functions. Equation (1) expresses the densely connected channel, which allows the channel to explore new features; equation (2) expresses the reuse of common features in the residual channel; equation (3) fuses the features of the dense channel with those of the residual channel, completing the final feature transformation; and in equation (4), h_k represents the current state, which can be used for the mapping or prediction of the next step.
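Under the assumption that the states h_t are scalars and f, v, g are simple callables, one step of the dual-path recurrence in equations (1) to (4) can be sketched in a few lines; the real DPN applies these transforms to feature tensors, so this is only a numeric illustration:

```python
# Toy numeric sketch of equations (1)-(4): the dense path accumulates
# transformed past states (new features), the residual path reuses them,
# and the two are fused before producing the next state.

def dpn_step(history, f, v, g):
    """history holds the previous states h_1 .. h_{k-1}."""
    x_k = sum(f(h) for h in history)  # eq. (1): dense channel, new features
    y_k = sum(v(h) for h in history)  # eq. (2): residual channel, feature reuse
    r_k = x_k + y_k                   # eq. (3): fuse the two channels
    return g(r_k)                     # eq. (4): next state h_k = g_k(r_k)
```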
In order to use the data that have only image-level labels, traditional methods directly use image tags to predict pixel labels and thereby achieve image segmentation; but because the information contained in image tags is too simple, it is difficult to recover accurate pixel labels. When the segmentation model is trained on a small amount of fully annotated image data and a large amount of weakly annotated image data, the image-level labels of the weakly annotated data lack any description of individual pixels, so extending image-level labels to pixel-level labels carries considerable uncertainty. Existing research seldom considers the influence of this uncertainty on pixel-level segmentation.
Therefore, in order to make full use of the information contained in the images, we reconstruct the input original image on the basis of using image-level labels to assist the prediction of pixel-level labels, and compare the reconstructed image with the input original image to further optimize the network model parameters. This reduces the dependence of medical image segmentation on the amount of pixel-level annotated data, lowers the data collection cost, and achieves pixel-level segmentation of images using a small amount of pixel-level annotated image data and a large amount of image-level annotated data.
In the proposed scheme, the data set used includes a small high-quality pixel-label data set and a large image-tag data set, and four network modules are constructed: the basic network A, the pixel-label prediction network B, the image-tag prediction network C, and the image reconstruction network D. The basic content is divided into the following four steps:
1. Network model construction and initialization. The basic network A uses a dual path network, including a deep residual network and a densely connected convolutional neural network. These two networks have many hidden layers; in order to obtain good initialization parameters, they are pre-trained on a public data set. The pixel-label prediction network B uses a capsule network structure model to extract pixel-level features. The image-tag prediction network C uses a convolutional neural network to extract image-level features and perform the corresponding tag prediction. The network D first fuses the output layers of the networks B and C, and then reconstructs the image using a multi-layer convolutional neural network. Compared with the basic network A, the networks B, C, and D have fewer layers, so their parameters can be randomly initialized.
2. network model initial training.Utilize the Pixel-level labeled data { I of high qualityf, Lf, Tf, to by minimizing damage
It loses function and parameter is carried out to basic network A, pixel tag prediction network B, image tag prediction network C and image reconstruction network D
It adjusts.Loss function includes that pixel tag predicts corresponding loss function Lossmap(Bo, Lf), the corresponding loss of image tag
Function LosstagThe Euclidian loss function Loss of (Co, Tf) and image reconstructionimg(Do, If).Wherein Bo, Co and Do difference
Represent the Pixel-level label of prediction, image level label and reconstructed image.
3. Pixel-level Tag Estimation.In order to utilize the image level labeled data { I for being easier to obtainw, Tw, pass through minimum
Image tag loss function and image reconstruction loss function realize the further training to model.After the completion of training, pass through integration
Image level label information is realized to Pixel-level label LwPrediction.
4. being converted into full labeled data.After the Pixel-level label information for obtaining weak labeled data, the L of predictionwAs true
Value, converts the image data of weak mark to the image data marked entirely.Therefore, using whole image datas to overall network
Model exercises supervision training.All image datas are represented by { I, L, T }, wherein I={ If, Iw, L={ Lf, LwAnd T={ Tf,
Tw}.So the optimization problem of whole network can convert are as follows:
argmin_θ { Loss_map(B_o, L) + Loss_tag(C_o, T) + Loss_img(D_o, I) }
where θ denotes the parameters of the whole network. Optimizing the above objective function optimizes the parameters of all the network models.
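On toy list-valued data, the joint objective above can be sketched by summing the three loss terms; a squared-error stand-in replaces the real softmax and Euclidean losses here, an assumption made only so the sketch runs without a deep learning framework:

```python
# Sketch of the joint objective Loss_map + Loss_tag + Loss_img on toy data.

def mse(pred, target):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def total_loss(b_out, c_out, d_out, pixel_labels, image_tags, images):
    loss_map = mse(b_out, pixel_labels)  # pixel-label prediction loss
    loss_tag = mse(c_out, image_tags)    # image-tag prediction loss
    loss_img = mse(d_out, images)        # image-reconstruction loss
    return loss_map + loss_tag + loss_img
```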
Using the converted fully labeled data, the scheme updates the parameters of the network structure by minimizing the reconstruction loss. Once all the network parameters have been updated, pixel labels can be predicted for new images, achieving pixel-level segmentation of images.
For a small number of fully annotated images {I_f, L_f, T_f} and a large number of weakly annotated images {I_w, T_w} that have only image-level labels, the scheme of the present invention proposes a personalized segmentation model based on weakly supervised learning.
The model includes four modules: the basic network A, the pixel-label prediction network B, the image-tag prediction network C, and the image reconstruction network D.
The technical flow of the proposed scheme is shown in Fig. 1, and includes a model construction stage, a model training stage, and a model testing stage.
Model construction stage
The model uses a multi-task deep learning framework, and model training and prediction are achieved by optimizing three goal tasks. The output layers include the image-tag output, the pixel-label output, and the reconstructed-image output. Each output layer corresponds to a sub-network, abbreviated as the A network, the B network, and the C network; the three sub-networks are built on a basic backbone network. The model structure is shown in Fig. 2.
In order to use the data that have only image-level labels, traditional methods directly use image tags to predict pixel labels and thereby achieve image segmentation; but because the information contained in image tags is too simple, it is difficult to recover accurate pixel labels. Therefore, in order to make full use of the information contained in the images, we use the image-level labels to assist the reconstruction of the input original image on the basis of predicting pixel-level labels, and compare the reconstructed image with the input original image to further optimize the network model parameters. In this way three sub-networks are constructed, corresponding respectively to the A network, the B network, and the C network in Fig. 2. Through the sharing of the basic backbone network, the backbone network parameters can be adjusted in each subtask. The constructed network model is specifically described as follows:
1. The A network uses a densely connected convolutional network and a residual convolutional network. The dense connections explore new features and increase the flexibility of the features, so that a smaller network structure can achieve representation learning of a complex feature space. The deep residual network can repeatedly reuse earlier features, reducing model redundancy and training difficulty.
2. The B network uses the capsule network mechanism. The capsules obtain feature images by performing convolution operations on the image-level annotated samples {I_w, T_w}; the network includes a convolutional layer, a PrimaryCaps layer, a DigitCaps layer, and a decoding layer. Each capsule represents a function and outputs an activation vector, which represents the pixel-level segmentation tag found by the capsule.
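The patent only states that each capsule outputs an activation vector; in published capsule networks that vector is typically produced by the "squash" nonlinearity, which preserves orientation while mapping the length into [0, 1). The sketch below assumes that standard nonlinearity:

```python
import math

def squash(s):
    """Capsule squash: keeps the direction of s, maps its length into [0, 1)."""
    norm2 = sum(x * x for x in s)
    if norm2 == 0.0:
        return [0.0 for _ in s]  # zero vector stays zero
    norm = math.sqrt(norm2)
    scale = norm2 / (1.0 + norm2) / norm
    return [scale * x for x in s]
```

For example, squash([3.0, 4.0]) keeps the 3:4 direction but shrinks the length from 5 to 25/26.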
3. The C network is a convolutional neural network whose structure includes convolutional layers, down-sampling layers, and fully connected layers. Each layer has multiple feature maps, each feature map extracts one kind of feature from the input through a convolution filter, and each feature map has multiple neurons. The convolution operation enhances the original signal features and reduces noise, extracting the image-level labels of the training data.
Model training stage
Since the data set is divided into fully labeled image data and weakly labeled image data with only image-level labels, training proceeds in three stages: 1. pre-training the network model with the high-quality pixel-level labeled data (supervised learning); 2. fine-tuning with all the data (semi-supervised learning); 3. converting into fully labeled data.
Supervised learning stage:
In the fully supervised learning stage, the model is trained using the image data with full labels. The training process is divided into three steps:
1. first train the DPN, the A network, and the B network simultaneously using the image tags and pixel tags;
2. fix the network parameters learned above, and learn the parameters of the C network through image reconstruction;
3. update the parameters of the whole network structure simultaneously.
Following the multi-step training method of multi-task learning makes the network parameters change smoothly. The loss functions include the softmax loss function Loss_map(A_o, L_f) for pixel-label prediction, the softmax loss function Loss_tag(B_o, T_f) for image tags, and the Euclidean loss function Loss_img(C_o, I_f) for image reconstruction.
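The softmax losses named above are standard negative log-likelihoods of the true class under a softmax; a pure-Python version, a stand-in for a framework's built-in loss, looks like this:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_loss(logits, true_class):
    """Negative log-probability of the true class."""
    return -math.log(softmax(logits)[true_class])
```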
Semi-supervised learning stage:
In the semi-supervised learning stage, the model is fine-tuned using all the fully and weakly labeled image data. All the image data can be denoted {I, L, T}, where I = {I_f, I_w}, L = {L_f, L_w}, and T = {T_f, T_w}. The optimization problem of the whole network can then be converted into:
argmin_θ { Loss_map(B_o, L) + Loss_tag(C_o, T) + Loss_img(D_o, I) }
where θ denotes the parameters of the whole network, and α_1, β_1 respectively denote the parameters of the output layer of the DPN and of the hidden layers A1 and B1.
Fully labeled data conversion stage:
For the fully labeled data, directly optimizing the first part of the equation fine-tunes all the network model parameters. For the weakly labeled image data, the missing pixel labels L_w can be inferred with the aid of the image-level labels T_w. With the other parameters fixed, the parameters of the two network structures A1 and B1 are updated by minimizing the reconstruction loss, so as to predict the pixel tags and image tags; the predicted pixel tags and image tags are then used to update all the network parameters. After all the network parameters have been updated, the pixel labels L_w are predicted for new images, completing the training stage of the model.
Model testing stage
Based on the trained model, the test stage achieves a preliminary segmentation of the medical image by fusing the image-level label information and the pixel-level label information. The trained model extracts the features common to image segmentation across the test data set, but ignores the individual characteristics of different image data. Therefore, in order to make the segmentation more accurate, the individual characteristics of the test image must be taken into account.
In the test stage, the input image is reconstructed and the image-reconstruction loss function is minimized, achieving personalized learning of the network model for the test image; the learned model is then used to predict the image-level labels and pixel-level labels, and finally accurate segmentation of the image is achieved using the predicted image-level labels and pixel-level labels and the feature mapping of the reconstructed image.
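Test-time personalization as described above amounts to a few gradient steps on the reconstruction loss alone, for a single test image. With a scalar stand-in for the model and an injected gradient function (both assumptions made for the sketch), the loop is:

```python
# Sketch of test-time personalization: fine-tune on one image by
# minimizing only the image-reconstruction loss, then predict labels.

def personalize(model, image, grad, lr=0.1, steps=50):
    """grad(model, image) returns d(Loss_img)/d(model) for this image."""
    for _ in range(steps):
        model = model - lr * grad(model, image)  # gradient step on Loss_img
    return model
```

For instance, with reconstruction loss (model - image)^2 and gradient 2*(model - image), the loop converges toward the optimum for that particular test image.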
An embodiment of the present invention also provides a computer storage medium, the computer storage medium storing a computer program for electronic data interchange, the computer program causing a computer to execute some or all of the steps of any medical image segmentation method recorded in the above method embodiments.
It should be noted that, for brevity, the foregoing method embodiments are all described as series of action combinations; but those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in this specification are alternative embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for the parts not described in detail in one embodiment, reference can be made to the related descriptions of other embodiments.
In several embodiments provided herein, it should be understood that disclosed device, it can be by another way
It realizes.For example, the apparatus embodiments described above are merely exemplary, such as the division of the unit, it is only a kind of
Logical function partition, there may be another division manner in actual implementation, such as multiple units or components can combine or can
To be integrated into another system, or some features can be ignored or not executed.Another point, shown or discussed is mutual
Coupling, direct-coupling or communication connection can be through some interfaces, the indirect coupling or communication connection of device or unit,
It can be electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and is sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Persons of ordinary skill in the art may understand that all or some of the steps in the various methods of the foregoing embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the above description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, persons skilled in the art may make modifications to the specific implementations and application scope according to the idea of the present invention. In conclusion, the content of this specification shall not be construed as a limitation on the present invention.
Claims (9)
1. A medical image segmentation method, characterized in that the method comprises the following steps:
establishing a network model, and training the network model to obtain a trained network model;
using the trained model to predict image-level labels and pixel-level labels; and finally, using the predicted image-level labels and pixel-level labels and the feature mapping of a reconstructed image to achieve accurate segmentation of the image.
2. The method according to claim 1, characterized in that:
the network model comprises a base network A and a base network B; the base network A comprises a deep residual network and a densely connected convolutional neural network; and the base network B comprises a capsule network structure model.
3. The method according to claim 1, characterized in that training the network model to obtain a trained network model specifically comprises:
using pixel-level labeled data {I_f, L_f, T_f} to adjust, by minimizing loss functions, the parameters of the base network, the pixel-label prediction network, the image-tag prediction network, and the image reconstruction network; the loss functions comprise the loss function Loss_map(B_o, L_f) corresponding to pixel-label prediction, the loss function Loss_tag(C_o, T_f) corresponding to the image tag, and the Euclidean loss function Loss_img(D_o, I_f) of image reconstruction, where B_o, C_o, and D_o respectively denote the predicted pixel-level labels, the predicted image-level labels, and the reconstructed image; and
using image-level labeled data {I_w, T_w} to further train the model by minimizing the image-tag loss function and the image-reconstruction loss function; after the further training is completed, predicting the pixel-level labels L_w by integrating the image-level label information; and after the pixel-level label information of the weakly labeled data is obtained, taking the predicted L_w as ground truth, converting the weakly labeled image data into fully labeled image data, and performing supervised training on the overall network model using all the image data to obtain the trained network model.
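The conversion of weakly labeled data into fully labeled data described in claim 3 is, in effect, pseudo-labeling: the per-pixel class probabilities predicted for a weakly labeled image are collapsed into hard labels L_w, which are then treated as ground truth. A minimal NumPy sketch of this step (the array shapes and function names are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def pseudo_label(pixel_probs):
    """Collapse predicted per-pixel class probabilities (H, W, C)
    into hard pixel-level labels (H, W) by taking the argmax class."""
    return np.argmax(pixel_probs, axis=-1)

def promote_weak_to_full(weak_images, pixel_probs_batch, image_tags):
    """Treat the predicted pixel labels L_w as ground truth, turning
    weakly labeled pairs {I_w, T_w} into fully labeled triples (I, L, T)."""
    return [(img, pseudo_label(probs), tag)
            for img, probs, tag in zip(weak_images, pixel_probs_batch, image_tags)]

# Toy example: one 2x2 image with 3 candidate classes per pixel.
probs = np.array([[[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]],
                  [[0.2, 0.2, 0.6], [0.9, 0.05, 0.05]]])
labels = pseudo_label(probs)  # [[1, 0], [2, 0]]
```

Once promoted, the weakly labeled images can join the fully labeled set for the supervised training pass over the whole network.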
4. The method according to claim 3, wherein the supervised training comprises:
adopting a multi-step training method based on multi-task learning so that the parameters of the network change smoothly, wherein the loss functions comprise the softmax loss function Loss_map(A_o, L_f) of pixel-label prediction, the softmax loss function Loss_tag(B_o, T_f) of the image tag, and the Euclidean loss function Loss_img(C_o, I_f) of image reconstruction; and
in the semi-supervised learning stage, fine-tuning the model using all fully labeled and weakly labeled image data; all the image data are denoted as {I, L, T}, where I = {I_f, I_w}, L = {L_f, L_w}, and T = {T_f, T_w}; the optimization problem of the whole network is then converted into:
where θ denotes the parameters of the whole network, and α_1 and β_1 respectively denote the parameters of the output layer of the DPN and of the hidden layers A1 and B1.
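The multi-task objective in claim 4 combines a softmax (cross-entropy) loss for pixel-label prediction, a softmax loss for the image tag, and a Euclidean loss for image reconstruction. A NumPy sketch of such a combined loss follows; the shapes, the equal weighting of the three terms, and the function names are illustrative assumptions, not the patent's actual formulation:

```python
import numpy as np

def softmax_cross_entropy(logits, targets):
    """Mean softmax cross-entropy; logits (N, C), targets (N,) int class ids."""
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def euclidean_loss(reconstruction, image):
    """Euclidean (L2) reconstruction loss, in the spirit of Loss_img(C_o, I_f)."""
    return 0.5 * np.sum((reconstruction - image) ** 2)

def total_loss(pixel_logits, L_f, tag_logits, T_f, recon, I_f):
    """Multi-task objective: Loss_map + Loss_tag + Loss_img.
    Pixel logits (H, W, C) are flattened to (H*W, C) for the softmax loss."""
    loss_map = softmax_cross_entropy(
        pixel_logits.reshape(-1, pixel_logits.shape[-1]), L_f.reshape(-1))
    loss_tag = softmax_cross_entropy(tag_logits, T_f)
    loss_img = euclidean_loss(recon, I_f)
    return loss_map + loss_tag + loss_img
```

With confident, correct predictions and a perfect reconstruction, all three terms approach zero, which is what the minimization over θ, α_1, and β_1 in the claim drives toward.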
5. A medical image segmentation system, characterized in that the system comprises:
an establishing module, configured to establish a network model and train the network model to obtain a trained network model;
a prediction module, configured to use the trained model to predict image-level labels and pixel-level labels; and
a segmentation module, configured to use the predicted image-level labels and pixel-level labels and the feature mapping of a reconstructed image to achieve accurate segmentation of the image.
6. The system according to claim 5, characterized in that:
the network model comprises a base network A and a base network B; the base network A comprises a deep residual network and a densely connected convolutional neural network; and the base network B comprises a capsule network structure model.
7. The system according to claim 5, characterized in that:
the establishing module is specifically configured to use pixel-level labeled data {I_f, L_f, T_f} to adjust, by minimizing loss functions, the parameters of the base network, the pixel-label prediction network, the image-tag prediction network, and the image reconstruction network, wherein the loss functions comprise the loss function Loss_map(B_o, L_f) corresponding to pixel-label prediction, the loss function Loss_tag(C_o, T_f) corresponding to the image tag, and the Euclidean loss function Loss_img(D_o, I_f) of image reconstruction, where B_o, C_o, and D_o respectively denote the predicted pixel-level labels, the predicted image-level labels, and the reconstructed image; and
to use image-level labeled data {I_w, T_w} to further train the model by minimizing the image-tag loss function and the image-reconstruction loss function; after the further training is completed, to predict the pixel-level labels L_w by integrating the image-level label information; and after the pixel-level label information of the weakly labeled data is obtained, to take the predicted L_w as ground truth, convert the weakly labeled image data into fully labeled image data, and perform supervised training on the overall network model using all the image data to obtain the trained network model.
8. The system according to claim 5, wherein the supervised training comprises:
adopting a multi-step training method based on multi-task learning so that the parameters of the network change smoothly, wherein the loss functions comprise the softmax loss function Loss_map(A_o, L_f) of pixel-label prediction, the softmax loss function Loss_tag(B_o, T_f) of the image tag, and the Euclidean loss function Loss_img(C_o, I_f) of image reconstruction; and
in the semi-supervised learning stage, fine-tuning the model using all fully labeled and weakly labeled image data; all the image data are denoted as {I, L, T}, where I = {I_f, I_w}, L = {L_f, L_w}, and T = {T_f, T_w}; the optimization problem of the whole network is then converted into:
where θ denotes the parameters of the whole network, and α_1 and β_1 respectively denote the parameters of the output layer of the DPN and of the hidden layers A1 and B1.
9. A computer-readable storage medium storing a program for electronic data interchange, wherein the program causes a terminal to execute the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811024532.3A CN109344833B (en) | 2018-09-04 | 2018-09-04 | Medical image segmentation method, segmentation system and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109344833A true CN109344833A (en) | 2019-02-15 |
CN109344833B CN109344833B (en) | 2020-12-18 |
Family
ID=65292313
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811024532.3A Active CN109344833B (en) | 2018-09-04 | 2018-09-04 | Medical image segmentation method, segmentation system and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109344833B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109872333A (en) * | 2019-02-20 | 2019-06-11 | 腾讯科技(深圳)有限公司 | Medical image dividing method, device, computer equipment and storage medium |
CN109996073A (en) * | 2019-02-26 | 2019-07-09 | 山东师范大学 | A kind of method for compressing image, system, readable storage medium storing program for executing and computer equipment |
CN110009097A (en) * | 2019-04-17 | 2019-07-12 | 电子科技大学 | The image classification method of capsule residual error neural network, capsule residual error neural network |
CN110009034A (en) * | 2019-04-02 | 2019-07-12 | 中南大学 | Air-conditioner set type identifier method |
CN110136132A (en) * | 2019-05-24 | 2019-08-16 | 国网河北省电力有限公司沧州供电分公司 | Detection method, detection device and the terminal device of equipment heating failure |
CN110189323A (en) * | 2019-06-05 | 2019-08-30 | 深圳大学 | A kind of breast ultrasound image focus dividing method based on semi-supervised learning |
CN110378438A (en) * | 2019-08-07 | 2019-10-25 | 清华大学 | Training method, device and the relevant device of Image Segmentation Model under label is fault-tolerant |
CN110458852A (en) * | 2019-08-13 | 2019-11-15 | 四川大学 | Segmentation of lung parenchyma method, apparatus, equipment and storage medium based on capsule network |
CN110503654A (en) * | 2019-08-01 | 2019-11-26 | 中国科学院深圳先进技术研究院 | A kind of medical image cutting method, system and electronic equipment based on generation confrontation network |
CN110570394A (en) * | 2019-08-01 | 2019-12-13 | 深圳先进技术研究院 | medical image segmentation method, device, equipment and storage medium |
CN110766701A (en) * | 2019-10-31 | 2020-02-07 | 北京推想科技有限公司 | Network model training method and device, and region division method and device |
CN110796673A (en) * | 2019-10-31 | 2020-02-14 | Oppo广东移动通信有限公司 | Image segmentation method and related product |
CN110889816A (en) * | 2019-11-07 | 2020-03-17 | 北京量健智能科技有限公司 | Image segmentation method and device |
CN111681233A (en) * | 2020-06-11 | 2020-09-18 | 北京小白世纪网络科技有限公司 | US-CT image segmentation method, system and equipment based on deep neural network |
CN111931823A (en) * | 2020-07-16 | 2020-11-13 | 平安科技(深圳)有限公司 | Fine-grained image classification model processing method and device |
CN112085696A (en) * | 2020-07-24 | 2020-12-15 | 中国科学院深圳先进技术研究院 | Training method and segmentation method of medical image segmentation network model and related equipment |
WO2020253296A1 (en) * | 2019-06-19 | 2020-12-24 | 深圳Tcl新技术有限公司 | Image segmentation model training method, image segmentation method, medium and terminal |
CN113160243A (en) * | 2021-03-24 | 2021-07-23 | 联想(北京)有限公司 | Image segmentation method and electronic equipment |
CN114693925A (en) * | 2022-03-15 | 2022-07-01 | 平安科技(深圳)有限公司 | Image segmentation method and device, computer equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140270495A1 (en) * | 2013-03-14 | 2014-09-18 | Microsoft Corporation | Multiple Cluster Instance Learning for Image Classification |
US20150110368A1 (en) * | 2013-10-22 | 2015-04-23 | Eyenuk, Inc. | Systems and methods for processing retinal images for screening of diseases or abnormalities |
CN108062756A (en) * | 2018-01-29 | 2018-05-22 | 重庆理工大学 | Image, semantic dividing method based on the full convolutional network of depth and condition random field |
CN108229543A (en) * | 2017-12-22 | 2018-06-29 | 中国科学院深圳先进技术研究院 | Image classification design methods and device |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109872333A (en) * | 2019-02-20 | 2019-06-11 | 腾讯科技(深圳)有限公司 | Medical image dividing method, device, computer equipment and storage medium |
CN109872333B (en) * | 2019-02-20 | 2021-07-06 | 腾讯科技(深圳)有限公司 | Medical image segmentation method, medical image segmentation device, computer equipment and storage medium |
US11854205B2 (en) | 2019-02-20 | 2023-12-26 | Tencent Technology (Shenzhen) Company Limited | Medical image segmentation method and apparatus, computer device, and storage medium |
CN109996073A (en) * | 2019-02-26 | 2019-07-09 | 山东师范大学 | A kind of method for compressing image, system, readable storage medium storing program for executing and computer equipment |
CN109996073B (en) * | 2019-02-26 | 2020-11-20 | 山东师范大学 | Image compression method, system, readable storage medium and computer equipment |
CN110009034A (en) * | 2019-04-02 | 2019-07-12 | 中南大学 | Air-conditioner set type identifier method |
CN110009097A (en) * | 2019-04-17 | 2019-07-12 | 电子科技大学 | The image classification method of capsule residual error neural network, capsule residual error neural network |
CN110136132A (en) * | 2019-05-24 | 2019-08-16 | 国网河北省电力有限公司沧州供电分公司 | Detection method, detection device and the terminal device of equipment heating failure |
CN110189323A (en) * | 2019-06-05 | 2019-08-30 | 深圳大学 | A kind of breast ultrasound image focus dividing method based on semi-supervised learning |
WO2020253296A1 (en) * | 2019-06-19 | 2020-12-24 | 深圳Tcl新技术有限公司 | Image segmentation model training method, image segmentation method, medium and terminal |
CN110503654A (en) * | 2019-08-01 | 2019-11-26 | 中国科学院深圳先进技术研究院 | A kind of medical image cutting method, system and electronic equipment based on generation confrontation network |
CN110570394A (en) * | 2019-08-01 | 2019-12-13 | 深圳先进技术研究院 | medical image segmentation method, device, equipment and storage medium |
CN110503654B (en) * | 2019-08-01 | 2022-04-26 | 中国科学院深圳先进技术研究院 | Medical image segmentation method and system based on generation countermeasure network and electronic equipment |
WO2021017168A1 (en) * | 2019-08-01 | 2021-02-04 | 深圳先进技术研究院 | Image segmentation method, apparatus, device, and storage medium |
CN110378438A (en) * | 2019-08-07 | 2019-10-25 | 清华大学 | Training method, device and the relevant device of Image Segmentation Model under label is fault-tolerant |
CN110458852A (en) * | 2019-08-13 | 2019-11-15 | 四川大学 | Segmentation of lung parenchyma method, apparatus, equipment and storage medium based on capsule network |
CN110796673A (en) * | 2019-10-31 | 2020-02-14 | Oppo广东移动通信有限公司 | Image segmentation method and related product |
CN110766701A (en) * | 2019-10-31 | 2020-02-07 | 北京推想科技有限公司 | Network model training method and device, and region division method and device |
CN110889816A (en) * | 2019-11-07 | 2020-03-17 | 北京量健智能科技有限公司 | Image segmentation method and device |
CN110889816B (en) * | 2019-11-07 | 2022-12-16 | 拜耳股份有限公司 | Image segmentation method and device |
CN111681233A (en) * | 2020-06-11 | 2020-09-18 | 北京小白世纪网络科技有限公司 | US-CT image segmentation method, system and equipment based on deep neural network |
CN111931823A (en) * | 2020-07-16 | 2020-11-13 | 平安科技(深圳)有限公司 | Fine-grained image classification model processing method and device |
CN112085696A (en) * | 2020-07-24 | 2020-12-15 | 中国科学院深圳先进技术研究院 | Training method and segmentation method of medical image segmentation network model and related equipment |
CN112085696B (en) * | 2020-07-24 | 2024-02-23 | 中国科学院深圳先进技术研究院 | Training method and segmentation method for medical image segmentation network model and related equipment |
CN113160243A (en) * | 2021-03-24 | 2021-07-23 | 联想(北京)有限公司 | Image segmentation method and electronic equipment |
CN114693925A (en) * | 2022-03-15 | 2022-07-01 | 平安科技(深圳)有限公司 | Image segmentation method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109344833B (en) | 2020-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109344833A (en) | Medical image cutting method, segmenting system and computer readable storage medium | |
CN108596882B (en) | The recognition methods of pathological picture and device | |
Jia et al. | Constrained deep weak supervision for histopathology image segmentation | |
Oktay et al. | Multi-input cardiac image super-resolution using convolutional neural networks | |
Yang et al. | Class-balanced deep neural network for automatic ventricular structure segmentation | |
CN110475505A (en) | Utilize the automatic segmentation of full convolutional network | |
Zhao et al. | Multi-view semi-supervised 3D whole brain segmentation with a self-ensemble network | |
CN111402278B (en) | Segmentation model training method, image labeling method and related devices | |
CN112465827A (en) | Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation | |
CN110175998A (en) | Breast cancer image-recognizing method, device and medium based on multiple dimensioned deep learning | |
Wang et al. | SK-Unet: An improved U-Net model with selective kernel for the segmentation of multi-sequence cardiac MR | |
Cui et al. | Bidirectional cross-modality unsupervised domain adaptation using generative adversarial networks for cardiac image segmentation | |
Tao et al. | Segmentation of multimodal myocardial images using shape-transfer GAN | |
Khan et al. | Segmentation of shoulder muscle MRI using a new region and edge based deep auto-encoder | |
Wang et al. | Medical matting: a new perspective on medical segmentation with uncertainty | |
Zheng et al. | Semi-supervised segmentation with self-training based on quality estimation and refinement | |
CN111127487A (en) | Real-time multi-tissue medical image segmentation method | |
Dayarathna et al. | Deep learning based synthesis of MRI, CT and PET: Review and analysis | |
Sáenz-Gamboa et al. | Automatic semantic segmentation of the lumbar spine: Clinical applicability in a multi-parametric and multi-center study on magnetic resonance images | |
Liu et al. | GAN based unsupervised segmentation: should we match the exact number of objects | |
CN111814891A (en) | Medical image synthesis method, device and storage medium | |
Bigolin Lanfredi et al. | Interpretation of disease evidence for medical images using adversarial deformation fields | |
CN115409843B (en) | Brain nerve image feature extraction method based on scale equalization coupling convolution architecture | |
Pak et al. | Weakly supervised deep learning for aortic valve finite element mesh generation from 3D CT images | |
CN115908438A (en) | CT image focus segmentation method, system and equipment based on deep supervised ensemble learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |