CN110070173A - Deep neural network segmentation method based on crosswise sub-patches - Google Patents
Deep neural network segmentation method based on crosswise sub-patches
- Publication number
- CN110070173A (application number CN201910233170.7A)
- Authority
- CN
- China
- Prior art keywords
- sub-patches
- submodel
- vertical
- horizontal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a deep neural network segmentation method based on crosswise sub-patches. On the one hand, a crosswise sub-patch, composed of two orthogonal rectangular sub-patches, is proposed; a crosswise sub-patch can simultaneously capture the global and local appearance information of a target in both the vertical and horizontal directions. On the other hand, using the extracted crosswise sub-patches, two submodels are trained iteratively in a cascaded manner. During training, the submodel for one direction is required to help segment the regions mis-segmented by the submodel for the other direction, encouraging each submodel to focus on the parts of the target that are hard to segment. The final segmentation result is produced by majority voting over all submodels. The disclosed segmentation method not only captures local detail information and global context information simultaneously, but also lets the different submodels provide complementary information to one another through cascaded training and thereby achieve self-improvement, strengthening the final segmentation result while maintaining high training efficiency.
Description
Technical field
The present invention relates to the field of neural network technology, and more particularly to a deep neural network segmentation method based on crosswise sub-patches.
Background technique
In both natural images and medical images there are many non-elongated, roughly circular targets, such as faces, hearts, kidney tumors, and lung nodules. Accurately segmenting them is often extremely important for production and daily life, but these segmentation tasks are usually very challenging. Owing to subjective factors (e.g., incorrect delineation) and objective factors (e.g., the sheer number of images), traditional manual delineation no longer meets the basic requirements. Although 3D models can fully exploit volumetric information, they demand more memory and longer training times, which hinders their application in time-limited work.
Based on the problems above, we propose a 2D deep convolutional neural network (CNN) model to segment such non-elongated targets. Compared with traditional image-analysis methods, the features learned by CNN-based models are superior both in information content and in discriminability, so CNNs are widely used in image segmentation tasks. However, as noted above, some existing 2D CNN models cannot achieve the expected results on particular tasks. Besides the blurred boundary between target and background and the similar textures that hamper the performance of existing 2D CNN models, we believe the data supplied to the existing models' input layer is also a key factor limiting what the models can do. In existing 2D CNN models, whether image-based or sub-patch-based, the input-layer data is either the whole image or a square sub-patch. This strategy may reduce model performance. As shown in Fig. 1(b), if the blue and green sub-patches there are fed into the model simultaneously, it is hard to see how the features the model learns from them could distinguish the nail from the background. Of course, the sub-patch could be enlarged to the black box or even the whole image, but experimental results show this does not improve the model's segmentation accuracy: although a large sub-patch contains more information, it also brings in more noise.
Therefore, how to provide a new image segmentation method is a problem that those skilled in the art urgently need to solve.
Summary of the invention
In view of this, the present invention provides a deep neural network segmentation method based on crosswise sub-patches that not only captures local detail information and global context information simultaneously, but also lets the different submodels provide complementary information to one another through cascaded training, strengthening the final segmentation and greatly improving training efficiency.
Based on the analysis in the Background, sub-patches that differ from squares are used to fully describe the texture and context information of the target. Again taking nail segmentation as an example, the target is irregular, symmetric, and roughly circular. As shown in Fig. 1(c), a rectangular sub-patch completely covers the target from one side to the other in one direction. In this way, contextual information and symmetry information are both integrated, without including excessive redundancy. As in other sub-patch-based CNN methods, the segmentation problem is converted into a pixel-wise binary classification problem. For each pixel, two sub-patches centered on it are extracted, one vertical and one horizontal, together called a crosswise sub-patch. The new model classifies the center pixel of each crosswise sub-patch, deciding whether it belongs to the target or to the background.
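As a concrete illustration of the extraction step described above, the sketch below crops the vertical (tall, narrow) and horizontal (short, wide) sub-patches centered on a given pixel, using the 100×20 / 20×100 sizes the kidney-tumor embodiment reports. The function name and the edge padding applied near image borders are illustrative assumptions, not part of the source.

```python
import numpy as np

def extract_crosswise_patches(image, row, col, long_side=100, short_side=20):
    """Extract the vertical and horizontal sub-patches centered at (row, col)."""
    # Pad so patches near the border stay full-sized (edge padding is an assumption).
    pad = long_side // 2
    padded = np.pad(image, pad, mode="edge")
    r, c = row + pad, col + pad
    # Vertical sub-patch: long_side rows x short_side cols.
    vertical = padded[r - long_side // 2: r + long_side // 2,
                      c - short_side // 2: c + short_side // 2]
    # Horizontal sub-patch: short_side rows x long_side cols.
    horizontal = padded[r - short_side // 2: r + short_side // 2,
                        c - long_side // 2: c + long_side // 2]
    return vertical, horizontal
```

Both patches share the same center pixel, so each pixel of the image yields exactly one crosswise sub-patch for classification.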
To achieve the above objects, the invention provides the following technical scheme:
A deep neural network segmentation method based on crosswise sub-patches, whose specific steps include the following:
Step 1: extract the vertical sub-patch and horizontal sub-patch of each pixel from the image to be segmented;
Step 2: establish a horizontal submodel and a vertical submodel, and continually update the horizontal submodel and the vertical submodel;
Step 3: train the vertical submodel updated in Step 2 with the vertical sub-patches extracted in Step 1, and train the horizontal submodel updated in Step 2 with the horizontal sub-patches extracted in Step 1;
Step 4: each submodel outputs a segmentation result, and the final result is generated by majority voting over the results of all submodels.
Preferably, in the above deep neural network segmentation method based on crosswise sub-patches, in Step 2, the specific steps for establishing the horizontal submodel and the vertical submodel include:
Step 21: establish the horizontal submodel and the vertical submodel;
Step 22: continually update the horizontal submodel and the vertical submodel established in Step 21; each update round yields a new horizontal submodel and a new vertical submodel;
Step 23: stop when the maximum number of training rounds is reached or the training error of each round converges.
Preferably, in the above deep neural network segmentation method based on crosswise sub-patches, in Step 21, the number of layers, the kernel sizes, and the number of feature maps of the horizontal submodel and the vertical submodel are determined by inner cross-validation experiments.
Preferably, in the above deep neural network segmentation method based on crosswise sub-patches, in Step 22, the specific steps include:
Step 221: extract sufficient vertical sub-patches and horizontal sub-patches using the basic sampling strategy, recording the two sets;
Step 222: use the two sets to train the vertical submodel V_{i-1} and the horizontal submodel H_{i-1}, and continually update them. Specifically, in the i-th update round (i > 1), feed the horizontal sub-patches into the horizontal submodel H_{i-1} and let H_{i-1} classify the center pixel of each horizontal sub-patch; record all mis-segmented pixels, and denote the set of their horizontal sub-patches as the mis-segmented region set;
Step 223: simultaneously perform resampling on the mis-segmented region set and basic sampling to obtain new vertical sub-patches; when performing basic sampling, the sampling interval must not repeat that of any previous round;
Step 224: retrain the vertical submodel V_{i-1} with the newly obtained vertical sub-patches to obtain the new vertical submodel V_i;
Step 225: retrain H_{i-1} in the same way to obtain H_i.
Preferably, in the above deep neural network segmentation method based on crosswise sub-patches, pixels are first uniformly sampled inside the segmentation target as the center points of positive sub-patches, with a different sampling interval in each training round; for example, one point may be taken every three or five points along different rows and columns, chosen case by case for the specific segmentation target. The principle for selecting negative-example center pixels outside the segmentation target is to select more densely the closer they are to the target and more sparsely the farther away. This can be realized as follows: with the center of the segmentation target as the circle center, points are selected at different intervals on a series of circles of diameter χ_i, where
χ_i = (1 − α_i) r + α_i R;
r is the inscribed-circle diameter of the segmentation target, R is 1.5 times its circumscribed-circle diameter, and α_i is a parameter depending on β, with i = 0, 1, …, floor(r/2); β is a constant whose value is determined by the specific data set and experimental results (for example, 3.5–5 may be used for kidney-tumor segmentation). Finally, excluding the parts covered by the segmentation target, one point is taken at certain intervals on the remaining parts of these circles; the sampling interval is again chosen case by case for the target. Given the chosen positive examples, the number of negative examples is adjusted so that the two classes are equal in number.
Preferably, in the above deep neural network segmentation method based on crosswise sub-patches, the resampling extracts, according to the positions of the center pixels misclassified by the horizontal submodel, vertical sub-patches that completely cover the mis-segmented region: the center points of the vertical sub-patches are taken at the center and at both sides of each horizontal sub-patch.
It can be seen from the above technical scheme that, compared with the prior art, the present disclosure provides a deep neural network segmentation method based on crosswise sub-patches that not only captures local detail information and global context information simultaneously; its cascaded training also lets the different submodels provide complementary information to one another, strengthening the final segmentation and greatly improving training efficiency. On the one hand, vertical and horizontal sub-patches are proposed, forming a crosswise sub-patch composed of two orthogonal non-square sub-patches; a crosswise sub-patch can simultaneously capture the global and local appearance information of the target from the vertical and horizontal directions. On the other hand, using the extracted crosswise sub-patches, the two submodels (i.e., the vertical submodel and the horizontal submodel) are trained iteratively in a cascaded manner. During training, each submodel is encouraged to focus on the difficult parts of the target (i.e., the mis-segmented regions); in particular, the vertical (horizontal) submodel is required to help segment the regions mis-segmented by the horizontal (vertical) submodel.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1(a) shows the image to be segmented described in the Background;
Fig. 1(b) is a schematic diagram of segmentation with square sub-patches in the Background;
Fig. 1(c) is a schematic diagram of segmentation with the crosswise sub-patches of the invention;
Fig. 2 is a schematic diagram of the model framework of the invention;
Fig. 3 is a schematic diagram of basic sampling in an embodiment of the invention;
Fig. 4 is a schematic diagram of resampling in an embodiment of the invention;
Fig. 5 is a line chart of the per-round training error when the vertical and horizontal submodels of the invention are trained separately;
Fig. 6 illustrates the performance of each submodel of the invention on kidney-tumor segmentation;
Fig. 7 is a histogram of the DSC and TPF values of each submodel of the invention;
Fig. 8 is the distribution of the DSC of each method over 600 test images;
Fig. 9 shows kidney-tumor segmentation results of the invention;
Fig. 10 shows breast-tumor segmentation results of the invention;
Fig. 11 shows heart segmentation results of the invention.
Specific embodiment
The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the invention.
The embodiments of the invention disclose a deep neural network segmentation method based on crosswise sub-patches that not only captures local detail information and global context information simultaneously; its cascaded training also lets the different submodels provide complementary information to one another, strengthening the final segmentation and greatly improving training efficiency.
Embodiment 1: taking kidney-tumor segmentation as an example.
1. sampling policy
Basic sampling: its purpose is to make the segmentation model focus more on the region near the boundary of the segmentation target, which in practice is considered hard to segment. The principle we adhere to is therefore to increase the sub-patches close to the segmentation target and to reduce redundant sub-patches far from it. This sampling strategy is used in every round of cascaded training, but with a different sampling interval. With this strategy, we select a subset of all pixels as sub-patch center points according to the distance between the current pixel and the center of the segmentation target: first, crosswise sub-patches are extracted uniformly within the target region as positive training samples (i.e., target sub-patches); then negative examples (i.e., non-target sub-patches) are sampled densely near the segmentation target and sparsely farther from it. As shown in Fig. 3(b), the black region is the segmentation target; to avoid overlap, one of every three pixels in this region is selected. Outside the segmentation target, the principle for selecting negative-example center pixels is to select more densely the closer they are to the target and more sparsely the farther away. As shown in Fig. 3(b), pixels are selected on the blue circles outside the target region as the center pixels of non-target crosswise sub-patches; the blue circles closer to the target region are denser, and the farther ones are relatively sparse. χ_i is the diameter of a blue circle:
χ_i = (1 − α_i) r + α_i R, (1)
where r is the inscribed-circle diameter of the segmentation target, R is 1.5 times its circumscribed-circle diameter, and α_i is a parameter depending on β, with i = 0, 1, …, floor(r/2); β is a constant whose value is determined by the specific data set and experimental results (for example, 3.5–5 may be used for kidney-tumor segmentation). Finally, excluding the parts covered by the segmentation target, one point is taken at certain intervals on the remaining parts of these circles; the sampling interval is again chosen case by case for the target. Given the chosen positive examples, the number of negative examples is adjusted so that the two classes are equal in number.
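The diameter schedule of Eq. (1) can be written down directly. The source leaves the exact formula for α_i (which involves the constant β) to an omitted expression, so the α_i values are taken as caller-supplied input here rather than guessed; the function name is an assumption.

```python
def ring_diameters(r, R, alphas):
    """Diameters chi_i = (1 - alpha_i) * r + alpha_i * R of the circles on
    which negative-example center pixels are sampled (Eq. 1).

    r: inscribed-circle diameter of the segmentation target.
    R: 1.5x the circumscribed-circle diameter of the target, per the text.
    alphas: the alpha_i schedule; its defining formula (involving beta) is
    omitted in the source, so it is left to the caller.
    """
    return [(1.0 - a) * r + a * R for a in alphas]
```

With α_i running from 0 to 1, the diameters sweep from r (hugging the target) out to R, so the rings thin out with distance as the text requires.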
Resampling: suppose a vertical sub-patch corresponds to a mis-segmented region of the vertical submodel V_t of round t. The purpose is to use horizontal sub-patches, i.e., to borrow the horizontal submodel, to segment this region better. Specifically, according to the positions of the center pixels misclassified by the vertical submodel, horizontal sub-patches that completely cover the mis-segmented region are extracted: to cover this region, the center points of the horizontal sub-patches are taken at the center and at both sides of each vertical sub-patch. To avoid extracting redundant sub-patches, one of every three pixels in each column is taken. The horizontal submodel can thus supplement the vertical submodel. In this manner, if the same pixel is mis-segmented by both submodels in the same round, the region around that pixel will be reinforced in the next round. Thus, with the sub-patches obtained by resampling and those obtained by basic sampling together serving as training data, the next round's model can be expected to outperform the current round's.
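The resampling geometry described above can be sketched as follows: for a misclassified center pixel, horizontal-patch centers are placed at the sides and middle of the offending vertical sub-patch, keeping one row in three to avoid redundancy. The exact offsets and the function name are one plausible reading of the source, not its exact recipe.

```python
def resample_centers(row, col, long_side=100, short_side=20, step=3):
    """Center points of the horizontal sub-patches extracted to cover the
    vertical sub-patch (long_side x short_side) centered at (row, col)
    whose center pixel was misclassified.

    Centers sit at the left side, center, and right side of the vertical
    patch, one row in every `step` along its height (assumed offsets).
    """
    rows = range(row - long_side // 2, row + long_side // 2 + 1, step)
    cols = (col - short_side // 2, col, col + short_side // 2)
    return [(r, c) for r in rows for c in cols]
```

Feeding these centers back to the patch extractor yields the horizontal sub-patches that are added to the next round's training set.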
2. Submodel structure
The architecture of the submodels was designed from a preliminary study on the CT kidney-tumor data set. On that data set, the number of layers, the kernel sizes, and the number of feature maps were determined by inner cross-validation experiments. In essence, the vertical submodel and the horizontal submodel each consist of 8 convolutional layers, 2 max-pooling layers, and 1 softmax layer. The sizes of the vertical and horizontal sub-patches are 100×20 and 20×100, respectively. The loss is computed from the softmax output and optimized with stochastic gradient descent. The model structure is not fixed and can change with the segmentation task.
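The layer stack just described can be listed schematically. The text fixes only the totals (8 convolutional layers, 2 max-pooling layers, 1 softmax layer) and leaves kernel sizes, channel widths, and pool positions to inner cross-validation, so the concrete values below are illustrative assumptions only.

```python
def submodel_layers(patch_h, patch_w):
    """List one submodel's layers: 8 conv layers, 2 max-pool layers, and a
    final softmax, per the text. The 3x3 'same'-padded kernels, channel
    widths, and pool positions (after the 4th and 8th conv) are assumptions.
    """
    layers, h, w = [], patch_h, patch_w
    channels = [1, 32, 32, 64, 64, 128, 128, 256, 256]  # assumed widths
    for i in range(8):
        layers.append(f"conv3x3 {channels[i]}->{channels[i + 1]} ({h}x{w})")
        if i in (3, 7):  # assumed pool positions
            h, w = h // 2, w // 2
            layers.append(f"maxpool2x2 -> ({h}x{w})")
    layers.append("softmax over 2 classes (target vs background)")
    return layers
```

For the 100×20 vertical patch this traces the spatial size down to 25×5 before the classifier; the horizontal submodel is the same stack applied to 20×100 input.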
3. Training and testing procedure
Denote the vertical and horizontal submodels of the i-th round as V_i and H_i, respectively.
First, enough crosswise sub-patches are extracted using the basic sampling strategy; the vertical and horizontal sets are then used to train the two submodels, yielding V_1 and H_1.
Next, the submodels are continually updated on the basis of the existing ones; each update round yields a new vertical submodel and a new horizontal submodel. Specifically, in the i-th update round (i > 1):
Evaluate H_{i-1}: feed the horizontal sub-patches into H_{i-1} and let the model classify the center pixel of each horizontal sub-patch. Record all mis-segmented pixels; the horizontal sub-patches corresponding to them are the mis-segmented regions, and the set of these horizontal sub-patches is called the mis-segmented region set.
Simultaneously perform resampling (on the mis-segmented region set) and basic sampling to obtain new vertical sub-patches. When performing basic sampling, the sampling interval must not repeat that of any previous round; moreover, sub-patches already taken by resampling will not be sampled again by basic sampling. In this way, the previous round's mis-segmented regions are reinforced in the current round, while the distribution of all sub-patches over the whole sample space keeps the model from drifting excessively toward the mis-segmented regions, and redundancy is also reduced.
Retrain V_{i-1} with the newly obtained vertical sub-patches to obtain V_i.
Retrain H_{i-1} in the same way to obtain H_i.
Finally, the above steps are repeated to keep updating the submodels until the maximum number of training rounds is reached or the training error of each round's submodels converges.
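The cascaded update loop described above can be sketched as a short driver. The five callables stand in for operations the text defines (fitting a submodel, locating mis-segmented pixels, and the two sampling strategies); their names and signatures are assumptions, not part of the source, and the error-convergence stop is reduced to a fixed round count for brevity.

```python
def cascade_train(train_vertical, train_horizontal, find_errors,
                  basic_sample, resample, max_rounds):
    """Sketch of the cascaded training loop: each round, each submodel is
    retrained on patches covering the OTHER direction's errors plus fresh
    basic samples drawn with a previously unused interval."""
    Sv = basic_sample("vertical", 1)
    Sh = basic_sample("horizontal", 1)
    V = [train_vertical(Sv)]      # V1
    H = [train_horizontal(Sh)]    # H1
    for i in range(2, max_rounds + 1):
        # New vertical patches: cover H's mis-segmented regions, plus
        # fresh basic samples for round i.
        Sv = resample(find_errors(H[-1], Sh)) + basic_sample("vertical", i)
        V.append(train_vertical(Sv))
        # Symmetrically retrain the horizontal submodel on V's errors.
        Sh = resample(find_errors(V[-1], Sv)) + basic_sample("horizontal", i)
        H.append(train_horizontal(Sh))
    return V, H  # all 2T submodels take part in the final vote
```

Swapping in real training and sampling routines leaves the control flow unchanged: the two directions alternate, each correcting the other.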
At the test stage, for a new image we first extract the crosswise sub-patches of each pixel. These sub-patches are then fed into all the vertical and horizontal submodels obtained in the training stage, to predict whether the center pixel of each sub-patch belongs to the tumor region. Each submodel outputs a segmentation result, and the final result is generated by majority voting over all submodel results. Formally, suppose T is the maximum number of rounds; then 2T submodels are available, namely V_1, …, V_T and H_1, …, H_T, and the result is decided by these 2T submodels.
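The final vote can be written in a few lines over the per-pixel binary maps of the 2T submodels. Since 2T is even, ties are possible; the tie-break toward background below is an assumption the source does not specify.

```python
import numpy as np

def majority_vote(predictions):
    """Combine the binary segmentation maps of the 2T submodels by
    per-pixel majority vote, as described above.

    predictions: list of equal-shaped 0/1 arrays, one per submodel.
    Ties are broken toward background (an assumption).
    """
    stack = np.stack(predictions)     # shape: (2T, H, W)
    votes = stack.sum(axis=0)         # how many submodels say "target"
    return (votes > len(predictions) / 2).astype(np.uint8)
```

A pixel is labeled target only when more than half of the submodels agree, which is what makes the ensemble robust to any single weak round.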
4. Experimental results
4.1 Data sets
Kidney-tumor data set. This data set was independently collected by Suzhou Science and Technology City Hospital. 3,500 CT slices from 94 subjects were used for performance evaluation, with one tumor per slice. Each image is 512×512, the resolution is 1×1 mm/pixel, and the inter-slice spacing is 1 mm. Across the images, tumor diameters range from 7 to 90 pixels; the tumors were annotated manually by physicians as the gold standard for training. The data set was randomly divided into three subsets, i.e., a training set, a validation set, and a test set, consisting of 50, 8, and 36 subjects respectively.
Breast-tumor X-ray data set INbreast. The images in INbreast carry high-quality manual annotations and are widely used to evaluate many models. This data set provides 116 high-quality images, each 3328×4084 or 2560×3328 pixels. The sizes of the vertical and horizontal sub-patches are 340×60 and 60×340, respectively.
Cardiac MR data set. This data set contains the cardiac MRI sequences of 33 subjects, 7,980 MR slices in total. The image resolution is 256×256, the in-plane pixel size is 0.9–1.6 mm, and the slice thickness is 6–13 mm. Each image provides the gold-standard endocardial and epicardial contours of the left ventricle (LV). We randomly selected 20 and 3 subjects as the training and validation sets used to train the submodels; the images of the remaining 10 subjects form the test set used to measure model performance. The crosswise sub-patch sizes are the same as for the kidney-tumor data set.
4.2 Validation of the new model's characteristics
Self-improvement of the model. The purpose of this experiment is to show that the vertical submodel can improve itself without involving the horizontal submodel, and vice versa. First, the vertical submodel is trained alone to obtain V_1. Then the resampling strategy is used to resample the mis-segmented regions, except that the resampled sub-patches are vertical rather than horizontal. Next, V_1 is trained with these vertical sub-patches together with the sub-patches obtained by the basic sampling strategy to obtain V_2, and likewise V_3, V_4, and so on. The training segmentation error converges after the submodel has been retrained 10 times; the horizontal submodel is not involved at any point. The same procedure is also applied to the horizontal submodel. As shown in Fig. 5, the training error of each submodel decreases as the number of rounds increases. Although this experiment shows that each submodel can improve itself, the vertical and horizontal submodels each need 10 rounds to converge; if the submodels are instead trained as shown in Fig. 2 rather than in this isolated manner, only 3 rounds are needed.
Fig. 5 shows the training error of each round when the vertical and horizontal submodels are trained separately.
Complementarity of the submodels. As shown in Fig. 6, when the model for one direction segments poorly, the other direction can compensate.
Effect of sub-patch shape and of majority voting. As shown in Fig. 7, crosswise sub-patches perform substantially better than square sub-patches, and the result of majority voting is significantly better than directly superimposing the last two submodels.
4.3 Comparison of the new method with other methods on kidney-tumor segmentation
Table 1. Comparison of the methods
4.4 Comparison of the new method with other methods on breast-tumor segmentation
Table 2. DSC comparison of the methods
Method | Cross-sensor | AM-FCN | Method | New method
---|---|---|---|---
0.8700 | 0.9000 | 0.9130 | 0.9118 | 0.9122
4.5 Comparison of the new method with other methods on heart segmentation
Each embodiment in this specification is described in a progressive manner; each embodiment highlights its differences from the others, and the same or similar parts of the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively simple; for the relevant parts, refer to the description of the method.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (6)
1. A deep neural network segmentation method based on crosswise sub-patches, characterized in that the specific steps include the following:
Step 1: extracting the vertical sub-patches and horizontal sub-patches of pixels from the image to be segmented;
Step 2: establishing a horizontal submodel and a vertical submodel, and continually updating the horizontal submodel and the vertical submodel;
Step 3: training the vertical submodel updated in Step 2 with the vertical sub-patches extracted in Step 1, and training the horizontal submodel updated in Step 2 with the horizontal sub-patches extracted in Step 1;
Step 4: each submodel outputting a segmentation result, the final result being generated by majority voting over the results of all submodels.
2. The deep neural network segmentation method based on crosswise sub-patches according to claim 1, characterized in that in Step 2, the specific steps for establishing the horizontal submodel and the vertical submodel include:
Step 21: establishing the horizontal submodel and the vertical submodel;
Step 22: continually updating the horizontal submodel and the vertical submodel established in Step 21, each update round yielding a new horizontal submodel and a new vertical submodel;
Step 23: stopping when the maximum number of training rounds is reached or the training error of each round converges.
3. The deep neural network segmentation method based on crosswise sub-patches according to claim 2, characterized in that in Step 21, the number of layers, the kernel sizes, and the number of feature maps of the horizontal submodel and the vertical submodel are determined by inner cross-validation experiments.
4. The deep neural network segmentation method based on transverse and longitudinal sub-patches according to claim 2, wherein in step 22, the specific steps comprise:
Step 221: extracting longitudinal sub-patches and lateral sub-patches using the basic sampling strategy, denoted respectively as the longitudinal sub-patch set and the lateral sub-patch set;
Step 222: training the longitudinal submodel V_{i-1} and the lateral submodel H_{i-1} with the two sub-patch sets and continually updating them; specifically, in the i-th update round (i > 1), the lateral sub-patches are input to the lateral submodel H_{i-1}, which classifies the centre pixel of each lateral sub-patch; all misclassified pixels are recorded, and the corresponding set of lateral sub-patches is referred to as the mis-segmented region;
Step 223: simultaneously performing resampling and basic sampling in the mis-segmented region to obtain new longitudinal sub-patches; when performing basic sampling, the sampling interval does not repeat that of any previous round;
Step 224: retraining the longitudinal submodel V_{i-1} with the newly obtained longitudinal sub-patches to obtain a new longitudinal submodel V_i;
Step 225: retraining H_{i-1} in the same way to obtain H_i.
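The alternating update of steps 221–225 and the stopping rule of step 23 can be sketched as follows. This is a minimal illustration only, not the patented implementation: `train_vertical`, `train_lateral`, and `sample_patches` are hypothetical caller-supplied callables, since the claims do not fix their interfaces.

```python
def crossbar_training(train_vertical, train_lateral, sample_patches,
                      max_rounds=5, tol=1e-3):
    """Alternating update of the longitudinal (V) and lateral (H) submodels.

    Assumed interfaces: each train_* fits a submodel on a patch set and
    returns (model, training_error); sample_patches returns a pair
    (longitudinal_patches, lateral_patches) for a given round index.
    """
    v_patches, h_patches = sample_patches(round_idx=0)   # step 221
    v_model, _ = train_vertical(v_patches)               # initial V_0
    h_model, _ = train_lateral(h_patches)                # initial H_0
    for i in range(1, max_rounds + 1):
        # Steps 222-223: in a full implementation, H_{i-1} would classify
        # the centre pixels of the lateral patches here, and the new
        # patches would be drawn from the resulting mis-segmented region.
        v_patches, h_patches = sample_patches(round_idx=i)
        v_model, v_err = train_vertical(v_patches)       # step 224: V_i
        h_model, h_err = train_lateral(h_patches)        # step 225: H_i
        if max(v_err, h_err) < tol:                      # step 23: converged
            break
    return v_model, h_model
```

The stopping rule mirrors step 23: the loop ends either at the maximum round count or once both per-round training errors drop below a tolerance.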
5. The deep neural network segmentation method based on transverse and longitudinal sub-patches according to claim 4, wherein the basic sampling strategy is as follows: first, pixels are uniformly sampled inside the segmentation target as centre points of positive-example sub-patches, with a different sampling interval in each training round; outside the segmentation target, centre pixels of negative-example sub-patches are selected more densely the closer they are to the target and more sparsely the farther away: taking the centre of the segmentation target as the circle centre, points are selected at varying intervals on a series of circles of diameter χ_i, where
χ_i = (1 − α_i) r + α_i R;
r is the inscribed-circle diameter of the segmentation target, R is 1.5 times the circumscribed-circle diameter of the segmentation target, and α_i is a parameter, where i = 0, 1, …, floor(r/2) and β is a constant whose value is determined by the specific dataset and experimental results; finally, excluding the part covered by the segmentation target, one point is sampled at regular intervals along the remaining part of each circle; given the chosen positive examples, the number of negative examples is adjusted so that the two classes contain equal numbers of samples.
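The concentric-circle selection of negative-example centres in claim 5 can be sketched as below. Note the published text does not give the exact α_i formula (only that it depends on a constant β), so an assumed linear schedule α_i = min(1, i·β) is used here; the point density decreasing on outer rings mimics "denser near the target, sparser farther away", and removal of points falling inside the target is omitted for brevity.

```python
import math

def negative_centres(cx, cy, r, R_circ, beta=0.1):
    """Centre points for negative-example sub-patches on concentric circles.

    r: inscribed-circle diameter of the target; R = 1.5 * R_circ, where
    R_circ is the circumscribed-circle diameter, per the claim.
    Ring diameters follow chi_i = (1 - alpha_i) * r + alpha_i * R.
    """
    R = 1.5 * R_circ
    centres = []
    for i in range(math.floor(r / 2) + 1):        # i = 0, 1, ..., floor(r/2)
        alpha = min(1.0, i * beta)                # assumed alpha_i schedule
        chi = (1.0 - alpha) * r + alpha * R       # chi_i: diameter of ring i
        n_pts = max(4, 40 // (i + 1))             # sparser on outer rings
        for k in range(n_pts):
            theta = 2.0 * math.pi * k / n_pts
            centres.append((cx + 0.5 * chi * math.cos(theta),
                            cy + 0.5 * chi * math.sin(theta)))
    return centres
```

The innermost ring (i = 0) has diameter r, i.e. it coincides with the target's inscribed circle, and the rings grow toward 1.5 times the circumscribed circle as α_i approaches 1.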
6. The deep neural network segmentation method based on transverse and longitudinal sub-patches according to claim 4, wherein the resampling extracts longitudinal sub-patches according to the positions of the centre pixels misclassified by the lateral submodel, such that the mis-segmented region is completely covered: the centre points of the longitudinal sub-patches are taken at the centre of each such lateral sub-patch and on its left and right sides, respectively.
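The resampling rule of claim 6 reduces to placing three new longitudinal patch centres per misclassified lateral patch. A minimal sketch, assuming the lateral patch is described by its centre pixel and a width parameter (`lateral_width` is an assumed name, not from the claim):

```python
def resample_longitudinal_centres(mis_centres, lateral_width):
    """For each lateral sub-patch whose centre pixel was misclassified,
    place new longitudinal sub-patch centres at the lateral patch's
    centre and at its left and right sides, so that the mis-segmented
    region is fully covered by the new longitudinal sub-patches."""
    half = lateral_width // 2
    new_centres = []
    for (x, y) in mis_centres:
        new_centres.extend([(x, y), (x - half, y), (x + half, y)])
    return new_centres
```

Each misclassified centre thus spawns three overlapping longitudinal patches spanning the full horizontal extent of the offending lateral patch.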
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910233170.7A CN110070173A (en) | 2019-03-26 | 2019-03-26 | A kind of deep neural network dividing method based on sub-pieces in length and breadth |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110070173A true CN110070173A (en) | 2019-07-30 |
Family
ID=67366685
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910233170.7A Pending CN110070173A (en) | 2019-03-26 | 2019-03-26 | A kind of deep neural network dividing method based on sub-pieces in length and breadth |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110070173A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110853021A (en) * | 2019-11-13 | 2020-02-28 | 江苏迪赛特医疗科技有限公司 | Construction of detection classification model of pathological squamous epithelial cells |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107767378A (en) * | 2017-11-13 | 2018-03-06 | 浙江中医药大学 | The multi-modal Magnetic Resonance Image Segmentation methods of GBM based on deep neural network |
CN108319972A (en) * | 2018-01-18 | 2018-07-24 | 南京师范大学 | A kind of end-to-end difference online learning methods for image, semantic segmentation |
CN109145939A (en) * | 2018-07-02 | 2019-01-04 | 南京师范大学 | A kind of binary channels convolutional neural networks semantic segmentation method of Small object sensitivity |
2019-03-26: application CN201910233170.7A filed in CN (publication CN110070173A, status: Pending)
Non-Patent Citations (1)
Title |
---|
QIAN YU: "Crossbar-Net: A Novel Convolutional Network for Kidney Tumor Segmentation in CT Images", IEEE * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105957063B (en) | CT image liver segmentation method and system based on multiple dimensioned weighting similarity measure | |
CN110070935B (en) | Medical image synthesis method, classification method and device based on antagonistic neural network | |
CN105869173B (en) | A kind of stereoscopic vision conspicuousness detection method | |
CN109741346A (en) | Area-of-interest exacting method, device, equipment and storage medium | |
CN105654121B (en) | A kind of complicated jacquard fabric defect inspection method based on deep learning | |
CN108257135A (en) | The assistant diagnosis system of medical image features is understood based on deep learning method | |
CN108961274B (en) | Automatic head and neck tumor segmentation method in MRI (magnetic resonance imaging) image | |
CN102324109B (en) | Method for three-dimensionally segmenting insubstantial pulmonary nodule based on fuzzy membership model | |
CN105096310B (en) | Divide the method and system of liver in magnetic resonance image using multi-channel feature | |
CN109767440A (en) | A kind of imaged image data extending method towards deep learning model training and study | |
CN107730507A (en) | A kind of lesion region automatic division method based on deep learning | |
CN103249358B (en) | Medical image-processing apparatus | |
CN109389584A (en) | Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN | |
CN107977952A (en) | Medical image cutting method and device | |
CN109872325B (en) | Full-automatic liver tumor segmentation method based on two-way three-dimensional convolutional neural network | |
CN108062749B (en) | Identification method and device for levator ani fissure hole and electronic equipment | |
CN110310287A (en) | It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium | |
CN110889852A (en) | Liver segmentation method based on residual error-attention deep neural network | |
Du et al. | Identification of COPD from multi-view snapshots of 3D lung airway tree via deep CNN | |
CN103562960B (en) | For generating the assigned unit between the image-region of image and element class | |
CN108447063A (en) | The multi-modal nuclear magnetic resonance image dividing method of Gliblastoma | |
DE102018108310A1 (en) | Image processing apparatus, image processing method and image processing program | |
CN110599499B (en) | MRI image heart structure segmentation method based on multipath convolutional neural network | |
CN112418337B (en) | Multi-feature fusion data classification method based on brain function hyper-network model | |
CN109063713A (en) | A kind of timber discrimination method and system based on the study of construction feature picture depth |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-07-30 |