CN108961274A - Automatic head and neck tumor segmentation method in MRI images - Google Patents
Automatic head and neck tumor segmentation method in MRI images
- Publication number
- CN108961274A CN108961274A CN201810730473.5A CN201810730473A CN108961274A CN 108961274 A CN108961274 A CN 108961274A CN 201810730473 A CN201810730473 A CN 201810730473A CN 108961274 A CN108961274 A CN 108961274A
- Authority
- CN
- China
- Prior art keywords
- mri image
- size
- image
- neural network
- tumors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The present invention discloses an automatic head and neck tumor segmentation method for MRI images, comprising the steps of: training a U-net-based neural network model, where the model includes a contracting encoder for analyzing the input MRI image and an expanding decoder for generating the output label map; in the U-net architecture, skip connections combine the appearance-feature representations of the shallow encoding layers with the high-level feature representations of the deep decoding layers; and segmenting the NPC tumor region in the MRI image under test using the trained neural network model. The invention achieves fast, robust and accurate automatic segmentation of NPC tumors in MRI images.
Description
Technical field
The invention belongs to the field of medical image processing, and in particular relates to an automatic head and neck tumor segmentation method for MRI images.
Background art
Among head and neck tumors, nasopharyngeal carcinoma (NPC) is the most common type and leads to high mortality; most NPC patients have already passed the optimal treatment window by the time the carcinoma is diagnosed. Accurate tumor delineation in magnetic resonance imaging (MRI) plays a crucial role in guiding radiotherapy.
Early diagnosis of NPC is therefore especially important in clinical practice. NPC patients are generally diagnosed through manual segmentation and medical image analysis. Compared with other tumor types such as brain tumors and lung nodules, NPC tumors have a more complex anatomical structure and usually exhibit intensities similar to surrounding tissues such as the brain stem, cochlea, parotid gland and lymph nodes; in addition, tumors from different NPC patients typically show high shape variability. These properties make NPC tumor segmentation a particularly challenging task.
Since MRI images containing NPC usually share perceptual properties with the nasal region, general image segmentation techniques based on visual features may be unsuitable for distinguishing NPC tumor boundaries in MRI images. Perhaps partly because of the segmentation challenge and the regional distribution of NPC cases, only a small body of literature exists along this research direction. Moreover, existing algorithms usually extract a set of hand-crafted features for lesion segmentation; however, because the shape of NPC tumors varies greatly and their intensity values are similar to those of adjacent tissue, such methods may limit segmentation performance.
Accurate NPC tumor segmentation, together with the determination of spread, volume and other characteristics, is therefore critical for diagnosis and subsequent treatment planning. Manual segmentation, however, is labor-intensive, and disagreement between different radiologists reduces its accuracy and robustness.
Summary of the invention
To solve the above problems, the invention proposes an automatic head and neck tumor segmentation method for MRI images that achieves fast, robust and accurate automatic segmentation of NPC tumors in MRI images.
To achieve the above objective, the technical solution adopted by the invention is an automatic head and neck tumor segmentation method for MRI images, comprising the steps of:
S100, training a U-net-based neural network model: the model includes a contracting encoder for analyzing the input MRI image and an expanding decoder for generating the output label map; in the U-net architecture, skip connections combine the appearance-feature representations of the shallow encoding layers with the high-level feature representations of the deep decoding layers;
training the U-net-based neural network model comprising the steps of:
S101, performing data preprocessing and data augmentation on the training image set; the data augmentation applies random nonlinear transformations to generate more training data and improve network performance, compensating for the limited amount of labeled NPC data available for training;
S102, training the neural network on entire MRI images from the training set, using skip connections to combine hierarchical features when generating the label map, so that good localization and use of context are achieved simultaneously;
S103, training the U-net-based neural network model with the label maps combined with the augmented data;
S200, segmenting the NPC tumor region in the MRI image under test using the trained neural network model, comprising the steps of: performing data acquisition, image preprocessing and NPC tumor region segmentation on the MRI image under test.
Further, the data acquisition comprises the steps of: acquiring T1-weighted MRI images, i.e. T1-MRI images, with a scanner; the T1-MRI images have the same size from head to neck and the same voxel size.
Further, considering that the NPC tumor occupies only a small region of the acquired image and the position of the nasopharynx is relatively fixed, the image preprocessing comprises the steps of: selecting, in the axial view of each MRI slice of the T1-MRI image, a region of interest whose size matches the nasopharyngeal region; performing isotropic resampling to reach a set resolution; correcting the bias field in the MRI image; and normalizing the intensity data of the T1-MRI image by subtracting the mean of the T1 sequence and dividing by its standard deviation.
Further, the T1-MRI images are acquired with a Philips Achieva 3.0 T scanner; the acquired images have the same size of 232 × 320 × 103 voxels from head to neck and the same voxel size of 0.6061 × 0.6061 × 0.8 mm³; in the axial view of each MRI slice, a 128 × 128 nasopharyngeal region is selected as the region of interest; isotropic resampling is performed to reach a resolution of 1.0 × 1.0 × 1.0 mm³.
Further, since NPC tumors have no specific shape, different patients typically exhibit large variations in tumor morphology; the random nonlinear transformation therefore uses image deformation. The data augmentation comprises the steps of: obtaining labeled NPC training data with MRI images of different shapes through image deformation.
The image deformation divides the rows and columns of the MRI image into segments, yielding boxes of equal size over the MRI image; the vertices on the box boundaries define the range of the deformation, and all vertices on the box boundaries serve as source control points from which the target positions of the control points are obtained; a warping function is applied to each vertex of the grid, producing labeled NPC training data with MRI images of different shapes. Generating sufficiently diverse training data with entirely different shapes realizes the data augmentation of the MRI images, compensating for the limited amount of labeled NPC training data and improving network performance.
Further, the NPC tumor segmentation comprises the steps of: in the label map of the neural network model, each pixel representing a tumor region is labeled 1 and each pixel representing a normal region is labeled 0; the skip connections used in the U-net architecture combine the high-level features from the expanding decoding layers with the appearance features from the contracting encoding layers; the NPC tumor region in the image is segmented through this combination of hierarchical features, yielding the NPC tumor region image.
Further, the U-net-based neural network model contains 28 convolutional layers.
The encoder path contains 5 convolution blocks; each convolution block contains 2 convolutional layers with 3 × 3 filters and a stride of 1 in each dimension, followed by ReLU activations. A dropout layer with rate 0.5 is placed after the last layer of the 4th and 5th blocks of the encoder path. The number of feature maps in the encoder increases from 1 to 1024. Each convolution block except the last ends with a down-convolution layer with 2 × 2 filters and a stride of 2, so that the size of the feature maps output by successive convolution blocks is reduced from 128 × 128 to 8 × 8.
The decoder path contains 4 up-convolution blocks; each up-convolution block starts with an up-convolution layer with 3 × 3 filters and a stride of 2 in each dimension, which doubles the size of the feature maps in the decoder while halving their number; the size of the feature maps in the decoder increases from 8 × 8 to 128 × 128. Each up-convolution block contains 2 convolutional layers, the first of which reduces the number of concatenated feature maps; the feature maps from the encoder path are copied and concatenated with the feature maps of the decoder path.
Further, 1 × 1 zero padding is applied in each convolutional layer of the U-net-based neural network model, so that the output patch size of each convolutional layer is:
Ioutput = (Iinput - F + 2P)/S + 1,
where Iinput and Ioutput are the input and output patch sizes of the convolutional layer, F is the filter size, P is the padding size, and S is the stride.
With output patch sizes computed in this way, feature maps of the same size are retained within each block of the encoder path and the decoder path.
Further, in the U-net-based neural network model a 1 × 1 convolutional layer reduces the number of feature maps to the number of feature maps of the label map, and a sigmoid function constrains the output to the range 0 to 1; each pixel in the label map representing a tumor region is marked 1 and each pixel representing a normal region is marked 0; the skip connections used in the U-net architecture combine the high-level features from the expanding decoder convolutional layers with the appearance features from the contracting encoder convolutional layers; the NPC tumor region in the image is segmented through this combination of hierarchical features.
Further, during training, binary cross-entropy is used as the cost function, and the network is trained with stochastic gradient descent optimization so that the cost function is minimized with respect to its parameters.
The technical solution provides the following beneficial effects:
The proposed method uses a deep neural architecture to extract feature information from the training data automatically, better capturing the relationship between MRI intensity images and the corresponding label maps.
The invention trains the neural network with entire MRI images as input rather than image patches; moreover, the skip-connection strategy combines the appearance-feature representations of the shallow encoding layers with the high-level feature representations of the deep decoding layers, so that combining hierarchical features achieves better segmentation performance and enables fast, robust and accurate automatic segmentation of NPC tumors in MRI images.
Brief description of the drawings
Fig. 1 is a flow diagram of the automatic head and neck tumor segmentation method for MRI images of the invention;
Fig. 2 is a schematic diagram of the principle of the automatic head and neck tumor segmentation method for MRI images in an embodiment of the invention;
Fig. 3 shows example image segmentation results in an embodiment of the invention.
Detailed description of embodiments
To make the objectives, technical solutions and advantages of the invention clearer, the invention is further described below with reference to the accompanying drawings.
In the present embodiment, referring to Fig. 1 and Fig. 2, the invention proposes an automatic head and neck tumor segmentation method for MRI images, comprising the steps of:
S100, training a U-net-based neural network model: the model includes a contracting encoder for analyzing the input MRI image and an expanding decoder for generating the output label map; in the U-net architecture, skip connections combine the appearance-feature representations of the shallow encoding layers with the high-level feature representations of the deep decoding layers;
training the U-net-based neural network model comprising the steps of:
S101, performing data preprocessing and data augmentation on the training image set; the data augmentation applies random nonlinear transformations to generate more training data and improve network performance, compensating for the limited amount of labeled NPC data available for training;
S102, training the neural network on entire MRI images from the training set, using skip connections to combine hierarchical features when generating the label map, so that good localization and use of context are achieved simultaneously;
S103, training the U-net-based neural network model with the label maps combined with the augmented data;
S200, segmenting the NPC tumor region in the MRI image under test using the trained neural network model, comprising the steps of: performing data acquisition, image preprocessing and NPC tumor region segmentation on the MRI image under test.
As a refinement of the above embodiment, the data acquisition comprises the steps of: acquiring T1-weighted MRI images, i.e. T1-MRI images, with a scanner; the T1-MRI images have the same size from head to neck and the same voxel size.
Considering that the NPC tumor occupies only a small region of the acquired image and the position of the nasopharynx is relatively fixed, the image preprocessing comprises the steps of: selecting, in the axial view of each MRI slice of the T1-MRI image, a region of interest whose size matches the nasopharyngeal region; performing isotropic resampling to reach a set resolution; correcting the bias field in the MRI image; and normalizing the intensity data of the T1-MRI image by subtracting the mean of the T1 sequence and dividing by its standard deviation.
The T1-MRI images are acquired with a Philips Achieva 3.0 T scanner; the acquired images have the same size of 232 × 320 × 103 voxels from head to neck and the same voxel size of 0.6061 × 0.6061 × 0.8 mm³; in the axial view of each MRI slice, a 128 × 128 nasopharyngeal region is selected as the region of interest; isotropic resampling is performed to reach a resolution of 1.0 × 1.0 × 1.0 mm³.
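The intensity normalization described above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the patented implementation: bias-field correction and isotropic resampling would be performed by a separate tool and are omitted, and the toy volume is a random placeholder.

```python
import numpy as np

def normalize_t1(volume: np.ndarray) -> np.ndarray:
    """Z-score normalize a T1-MRI volume: subtract the sequence mean
    and divide by its standard deviation, as described in the text."""
    v = volume.astype(np.float64)
    return (v - v.mean()) / v.std()

# Toy array standing in for a resampled 1.0 mm isotropic T1 volume.
vol = np.random.default_rng(0).normal(loc=300.0, scale=50.0, size=(16, 16, 8))
norm = normalize_t1(vol)
print(norm.mean(), norm.std())  # approximately 0 and 1
```

After this step every volume has zero mean and unit variance, so intensity ranges are comparable across scans.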
Since NPC tumors have no specific shape, different patients typically exhibit large variations in tumor morphology; the random nonlinear transformation therefore uses image deformation. The data augmentation comprises the steps of: obtaining labeled NPC training data with MRI images of different shapes through image deformation.
The image deformation divides the rows and columns of the MRI image into segments, yielding boxes of equal size over the MRI image; the vertices on the box boundaries define the range of the deformation, and all vertices on the box boundaries serve as source control points from which the target positions of the control points are obtained; a warping function is applied to each vertex of the grid, producing labeled NPC training data with MRI images of different shapes. Generating sufficiently diverse training data with entirely different shapes realizes the data augmentation of the MRI images, compensating for the limited amount of labeled NPC training data and improving network performance.
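The grid-based deformation can be sketched as follows. This is a simplified stand-in, not the patent's warping function: each grid cell gets one random integer shift spread over the cell by nearest-neighbor expansion, whereas a real implementation would interpolate the control-point displacements smoothly; the function name and parameters are illustrative.

```python
import numpy as np

def grid_deform(img: np.ndarray, cells: int = 4, max_shift: int = 2,
                rng=None) -> np.ndarray:
    """Divide the image rows and columns into equal cells, give each
    cell's control point a random shift, and resample the image with
    the resulting displacement field (nearest-neighbor, clipped)."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    # One random (dy, dx) shift per grid cell: the "control points".
    dy = rng.integers(-max_shift, max_shift + 1, size=(cells, cells))
    dx = rng.integers(-max_shift, max_shift + 1, size=(cells, cells))
    # Expand the per-cell shifts to a dense displacement field.
    dy = np.repeat(np.repeat(dy, h // cells, axis=0), w // cells, axis=1)
    dx = np.repeat(np.repeat(dx, h // cells, axis=0), w // cells, axis=1)
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + dy, 0, h - 1)
    src_x = np.clip(xs + dx, 0, w - 1)
    return img[src_y, src_x]

img = np.arange(128 * 128, dtype=np.float32).reshape(128, 128)
warped = grid_deform(img, cells=4, max_shift=2, rng=np.random.default_rng(1))
print(warped.shape)  # (128, 128)
```

In augmentation, the same displacement field would be applied to the image and its label map so that the pair stays consistent.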
The NPC tumor segmentation comprises the steps of: in the label map of the neural network model, each pixel representing a tumor region is labeled 1 and each pixel representing a normal region is labeled 0; the skip connections used in the U-net architecture combine the high-level features from the expanding decoding layers with the appearance features from the contracting encoding layers; the NPC tumor region in the image is segmented through this combination of hierarchical features, yielding the NPC tumor region image.
As a refinement of the above embodiment, the U-net-based neural network model contains 28 convolutional layers.
The encoder path contains 5 convolution blocks; each convolution block contains 2 convolutional layers with 3 × 3 filters and a stride of 1 in each dimension, followed by ReLU activations. A dropout layer with rate 0.5 is placed after the last layer of the 4th and 5th blocks of the encoder path. The number of feature maps in the encoder increases from 1 to 1024. Each convolution block except the last ends with a down-convolution layer with 2 × 2 filters and a stride of 2, so that the size of the feature maps output by successive convolution blocks is reduced from 128 × 128 to 8 × 8.
The decoder path contains 4 up-convolution blocks; each up-convolution block starts with an up-convolution layer with 3 × 3 filters and a stride of 2 in each dimension, which doubles the size of the feature maps in the decoder while halving their number; the size of the feature maps in the decoder increases from 8 × 8 to 128 × 128. Each up-convolution block contains 2 convolutional layers, the first of which reduces the number of concatenated feature maps; the feature maps from the encoder path are copied and concatenated with the feature maps of the decoder path.
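The spatial sizes stated above can be checked with a short calculation. The per-block feature-map schedule in the decoder (1024 halving to 64) is the standard U-net convention and is assumed here; the patent itself states only the 1-to-1024 endpoints.

```python
# Encoder: 5 convolution blocks; each of the first 4 ends with a
# stride-2 down-convolution that halves the spatial size.
size, sizes = 128, []
for block in range(5):
    sizes.append(size)
    if block < 4:            # the last block has no down-convolution
        size //= 2
print(sizes)                  # [128, 64, 32, 16, 8]

# Decoder: 4 up-convolution blocks, each doubling the spatial size
# and halving the number of feature maps.
maps, up = 1024, []
size = sizes[-1]
for _ in range(4):
    size *= 2
    maps //= 2
    up.append((size, maps))
print(up)                     # [(16, 512), (32, 256), (64, 128), (128, 64)]
```

The walk confirms the 128 × 128 to 8 × 8 contraction and the symmetric expansion back to 128 × 128 described in the text.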
1 × 1 zero padding is applied in each convolutional layer of the U-net-based neural network model, so that the output patch size of each convolutional layer is:
Ioutput = (Iinput - F + 2P)/S + 1,
where Iinput and Ioutput are the input and output patch sizes of the convolutional layer, F is the filter size, P is the padding size, and S is the stride.
With output patch sizes computed in this way, feature maps of the same size are retained within each block of the encoder path and the decoder path.
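The output-size formula above can be written as a one-line helper and used to confirm that a 3 × 3 convolution with stride 1 and padding 1 preserves the patch size, while the 2 × 2 stride-2 down-convolution halves it (an illustrative sketch of the formula, not library code):

```python
def conv_output_size(i_input: int, f: int, p: int, s: int) -> int:
    """Ioutput = (Iinput - F + 2P)/S + 1 for one spatial dimension."""
    return (i_input - f + 2 * p) // s + 1

# 3x3 filter, padding 1, stride 1: size is preserved within a block.
print(conv_output_size(128, f=3, p=1, s=1))  # 128
# 2x2 filter, no padding, stride 2 (down-convolution): size is halved.
print(conv_output_size(128, f=2, p=0, s=2))  # 64
```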
In the U-net-based neural network model a 1 × 1 convolutional layer reduces the number of feature maps to the number of feature maps of the label map, and a sigmoid function constrains the output to the range 0 to 1; each pixel in the label map representing a tumor region is marked 1 and each pixel representing a normal region is marked 0; the skip connections used in the U-net architecture combine the high-level features from the expanding decoder convolutional layers with the appearance features from the contracting encoder convolutional layers; the NPC tumor region in the image is segmented through this combination of hierarchical features.
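The final mapping from decoder features to a binary label map can be sketched in NumPy. The weights are random placeholders, and the 0.5 decision threshold is an assumption not stated in the text; a real 1 × 1 convolution would use trained weights.

```python
import numpy as np

def sigmoid(x):
    """Squash logits into (0, 1), as the output layer does."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
features = rng.normal(size=(128, 128, 64))      # decoder output feature maps
w = rng.normal(size=(64,)) * 0.1                # placeholder 1x1 conv weights
b = 0.0

logits = features @ w + b                       # 1x1 conv = per-pixel dot product
probs = sigmoid(logits)                         # outputs constrained to (0, 1)
label_map = (probs > 0.5).astype(np.uint8)      # tumor -> 1, normal -> 0
print(label_map.shape)                          # (128, 128)
```

A 1 × 1 convolution changes only the channel dimension, which is why it can collapse 64 feature maps into the single probability map the label requires.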
During training, binary cross-entropy is used as the cost function, and the network is trained with stochastic gradient descent optimization so that the cost function is minimized with respect to its parameters.
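The binary cross-entropy cost over the pixel-wise sigmoid outputs can be written directly. This is a sketch of the loss only; the epsilon clip is a standard numerical-stability detail assumed here, not taken from the patent, and the SGD loop itself is omitted.

```python
import numpy as np

def binary_cross_entropy(y_true: np.ndarray, y_pred: np.ndarray,
                         eps: float = 1e-7) -> float:
    """Mean binary cross-entropy between a 0/1 label map and sigmoid
    probabilities: the cost minimized by stochastic gradient descent."""
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y = np.array([[1, 0], [0, 1]], dtype=np.float64)
good = np.array([[0.9, 0.1], [0.1, 0.9]])   # confident, correct predictions
bad = np.array([[0.1, 0.9], [0.9, 0.1]])    # confident, wrong predictions
print(binary_cross_entropy(y, good) < binary_cross_entropy(y, bad))  # True
```

The loss is low when predicted probabilities agree with the labels and grows without bound as confident predictions become wrong, which is what drives the gradient updates.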
The proposed method was verified by testing:
To assess the segmentation performance of the method intuitively, some example segmentation results are given in Fig. 3. The first row shows MRI intensity images of NPC subjects, the second row shows the corresponding segmentation results of the proposed method, and the third row shows the manual segmentation results of a radiologist. As can be seen, even without any post-processing our segmentation results are very close to the ground truth, which shows that the method can accurately segment NPC tumors in MRI images.
The method is assessed using the DSC, ASSD, PM and CR metrics, and compared against a dictionary-learning (DL) baseline. In the baseline, for each target voxel a surrounding patch is extracted as the target patch; the same position is then located in each training sample and a neighborhood centered on that voxel is defined; next, a patch of the same size as the target patch is extracted from each voxel in the neighborhood to form a patch library; since the number of patches in the library is very large, a small dictionary is obtained by dictionary learning. Once the dictionary is obtained, the label of the target voxel is found by solving the corresponding sparse-representation classification problem. The comparison results are shown in Table 1.
Table 1. Comparison of evaluation metric values
As can be seen from Table 1, the proposed method achieves the highest DSC and the lowest ASSD, showing that it outperforms the other three methods. Compared with the CNN-based, FCN-based and proposed methods, the DL-based method yields the worst segmentation performance even though it reaches the highest CR. This is because deep learning methods do not depend on hand-crafted features but automatically learn a hierarchy of complex features from the training data. Compared with the CNN-based method, the average DSC of our method increases by about 1.67% and the average ASSD decreases by about 0.12 mm; in addition, the PM and CR values increase by 3.36% and 2.82%, respectively. Compared with the FCN-based method, which obtains the highest PM and performs better than the CNN-based method, the average DSC of our method increases by about 1.17% and the average ASSD decreases by about 0.0048 mm; in addition, the CR value increases by 1.44%.
The experimental results demonstrate the superiority of the deep neural network and the benefit of using the skip-connection strategy within it.
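Of the metrics reported above, the Dice similarity coefficient (DSC) is the primary overlap measure and is simple to compute; the sketch below illustrates it on toy binary masks (ASSD, PM and CR are omitted, and the masks are illustrative, not from the experiments):

```python
import numpy as np

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, gt).sum() / denom

a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1   # 16-pixel square
b = np.zeros((8, 8), dtype=np.uint8); b[3:7, 3:7] = 1   # same square, shifted
print(dice(a, a))   # 1.0
print(dice(a, b))   # 2*9 / (16+16) = 0.5625
```

DSC of 1.0 means perfect overlap with the radiologist's delineation; the reported gains of 1-2% DSC are improvements on this scale.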
The above shows and describes the basic principles, main features and advantages of the invention. Those skilled in the art should understand that the invention is not limited to the above embodiments; the above embodiments and description merely illustrate the principles of the invention, and various changes and improvements may be made to the invention without departing from its spirit and scope, all of which fall within the claimed scope of protection. The claimed scope of the invention is defined by the appended claims and their equivalents.
Claims (10)
1. An automatic head and neck tumor segmentation method for MRI images, characterized by comprising the steps of:
S100, training a U-net-based neural network model: the model includes a contracting encoder for analyzing the input MRI image and an expanding decoder for generating the output label map; in the U-net architecture, skip connections combine the appearance-feature representations of the shallow encoding layers with the high-level feature representations of the deep decoding layers;
training the U-net-based neural network model comprising the steps of:
S101, performing data preprocessing and data augmentation on the training image set, the data augmentation applying random nonlinear transformations;
S102, training the neural network on entire MRI images from the training set, using skip connections to combine hierarchical features to generate the label map;
S103, training the U-net-based neural network model with the label maps combined with the augmented data;
S200, segmenting the NPC tumor region in the MRI image under test using the neural network model, comprising the steps of: performing data acquisition, image preprocessing and NPC tumor region segmentation on the MRI image under test.
2. The automatic head and neck tumor segmentation method for MRI images according to claim 1, characterized in that the data acquisition comprises the steps of: acquiring T1-weighted MRI images, i.e. T1-MRI images, with a scanner; the T1-MRI images have the same size from head to neck and the same voxel size.
3. The automatic head and neck tumor segmentation method for MRI images according to claim 2, characterized in that the image preprocessing comprises the steps of: selecting, in the axial view of each MRI slice of the T1-MRI image, a region of interest whose size matches the nasopharyngeal region; performing isotropic resampling to reach a set resolution; correcting the bias field in the MRI image; and normalizing the intensity data of the T1-MRI image by subtracting the mean of the T1 sequence and dividing by its standard deviation.
4. The automatic head and neck tumor segmentation method for MRI images according to claim 3, characterized in that the T1-MRI images are acquired with a Philips Achieva 3.0 T scanner; the acquired images have the same size of 232 × 320 × 103 voxels from head to neck and the same voxel size of 0.6061 × 0.6061 × 0.8 mm³; in the axial view of each MRI slice, a 128 × 128 nasopharyngeal region is selected as the region of interest; isotropic resampling is performed to reach a resolution of 1.0 × 1.0 × 1.0 mm³.
5. The automatic head and neck tumor segmentation method for MRI images according to claim 4, characterized in that the data augmentation comprises the steps of: obtaining labeled NPC training data with MRI images of different shapes through image deformation;
the image deformation divides the rows and columns of the MRI image into segments, yielding boxes of equal size over the MRI image; the vertices on the box boundaries define the range of the deformation, and all vertices on the box boundaries serve as source control points from which the target positions of the control points are obtained; a warping function is applied to each vertex of the grid, producing labeled NPC training data with MRI images of different shapes.
6. The automatic head and neck tumor segmentation method for MRI images according to claim 5, characterized in that the NPC tumor segmentation comprises the steps of: in the label map of the neural network model, each pixel representing a tumor region is labeled 1 and each pixel representing a normal region is labeled 0; the skip connections used in the U-net architecture combine the high-level features from the expanding decoding layers with the appearance features from the contracting encoding layers; the NPC tumor region in the image is segmented through this combination of hierarchical features, yielding the NPC tumor region image.
7. The automatic head and neck tumor segmentation method for MRI images according to any one of claims 1 to 6, characterized in that the U-net-based neural network model contains 28 convolutional layers;
the encoder path contains 5 convolution blocks, each containing 2 convolutional layers with 3 × 3 filters and a stride of 1 in each dimension, followed by ReLU activations; a dropout layer with rate 0.5 is placed after the last layer of the 4th and 5th blocks of the encoder path; the number of feature maps in the encoder increases from 1 to 1024; each convolution block except the last ends with a down-convolution layer with 2 × 2 filters and a stride of 2, so that the size of the feature maps output by successive convolution blocks is reduced from 128 × 128 to 8 × 8;
the decoder path contains 4 up-convolution blocks, each starting with an up-convolution layer with 3 × 3 filters and a stride of 2 in each dimension, which doubles the size of the feature maps in the decoder while halving their number; the size of the feature maps in the decoder increases from 8 × 8 to 128 × 128; each up-convolution block contains 2 convolutional layers, the first of which reduces the number of concatenated feature maps; the feature maps from the encoder path are copied and concatenated with the feature maps of the decoder path.
8. The automatic head and neck tumor segmentation method for MRI images according to claim 7, characterized in that 1 × 1 zero padding is applied in each convolutional layer of the U-net-based neural network model, so that the output patch size of each convolutional layer is:
Ioutput = (Iinput - F + 2P)/S + 1,
where Iinput and Ioutput are the input and output patch sizes of the convolutional layer, F is the filter size, P is the padding size, and S is the stride;
with output patch sizes computed in this way, feature maps of the same size are retained within each block of the encoder path and the decoder path.
9. The automatic head and neck tumor segmentation method for MRI images according to claim 8, characterized in that in the U-net-based neural network model a 1 × 1 convolutional layer reduces the number of feature maps to the number of feature maps of the label map, and a sigmoid function constrains the output to the range 0 to 1; each pixel in the label map representing a tumor region is marked 1 and each pixel representing a normal region is marked 0; the skip connections used in the U-net architecture combine the high-level features from the expanding decoder convolutional layers with the appearance features from the contracting encoder convolutional layers; the NPC tumor region in the image is segmented through this combination of hierarchical features.
10. The automatic head and neck tumor segmentation method for MRI images according to claim 9, characterized in that during training, binary cross-entropy is used as the cost function, and the network is trained with stochastic gradient descent optimization so that the cost function is minimized with respect to its parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810730473.5A CN108961274B (en) | 2018-07-05 | 2018-07-05 | Automatic head and neck tumor segmentation method in MRI (magnetic resonance imaging) image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108961274A true CN108961274A (en) | 2018-12-07 |
CN108961274B CN108961274B (en) | 2021-03-02 |
Family
ID=64485908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810730473.5A Active CN108961274B (en) | 2018-07-05 | 2018-07-05 | Automatic head and neck tumor segmentation method in MRI (magnetic resonance imaging) image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108961274B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919954A (en) * | 2019-03-08 | 2019-06-21 | 广州视源电子科技股份有限公司 | Target object recognition method and device |
CN109961446A (en) * | 2019-03-27 | 2019-07-02 | 深圳视见医疗科技有限公司 | CT/MR three-dimensional image segmentation processing method, device, equipment and medium |
CN110009623A (en) * | 2019-04-10 | 2019-07-12 | 腾讯科技(深圳)有限公司 | Image recognition model training and image recognition method, apparatus and system |
CN110992338A (en) * | 2019-11-28 | 2020-04-10 | 华中科技大学 | Auxiliary diagnosis system for primary lesion metastasis |
CN111640118A (en) * | 2019-03-01 | 2020-09-08 | 西门子医疗有限公司 | Tumor tissue characterization using multi-parameter magnetic resonance imaging |
CN111784792A (en) * | 2020-06-30 | 2020-10-16 | 四川大学 | Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof |
CN113034461A (en) * | 2021-03-22 | 2021-06-25 | 中国科学院上海营养与健康研究所 | Pancreas tumor region image segmentation method and device and computer readable storage medium |
CN113192014A (en) * | 2021-04-16 | 2021-07-30 | 深圳市第二人民医院(深圳市转化医学研究院) | Training method, device, electronic equipment and medium for improving ventricle segmentation model |
CN114155215A (en) * | 2021-11-24 | 2022-03-08 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Nasopharyngeal carcinoma identification and tumor segmentation method and system based on MR image |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005010699A2 (en) * | 2003-07-15 | 2005-02-03 | Medical Metrx Solutions, Inc. | Generating a computer model using scan data of a patient |
CN102999917A (en) * | 2012-12-19 | 2013-03-27 | 中国科学院自动化研究所 | Automatic cervical cancer image segmentation method based on T2-magnetic resonance imaging (MRI) and diffusion-weighted (DW)-MRI |
CN104851101A (en) * | 2015-05-25 | 2015-08-19 | 电子科技大学 | Brain tumor automatic segmentation method based on deep learning |
CN104933711A (en) * | 2015-06-10 | 2015-09-23 | 南通大学 | Automatic fast segmenting method of tumor pathological image |
CN107220980A (en) * | 2017-05-25 | 2017-09-29 | 重庆理工大学 | Automatic MRI brain tumor segmentation method based on a fully convolutional network |
US20180122082A1 (en) * | 2016-11-02 | 2018-05-03 | General Electric Company | Automated segmentation using deep learned priors |
Worldwide Applications (1)

- 2018-07-05: CN CN201810730473.5A — patent CN108961274B (en), active
Non-Patent Citations (5)
Title |
---|
CHRISTIAN LUCAS et al.: "Multi-scale neural network for automatic segmentation of ischemic strokes on acute perfusion images", 2018 IEEE 15TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2018) * |
HAO DONG et al.: "Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks", AUTOMATIC BRAIN TUMOR DETECTION AND SEGMENTATION * |
JONATHAN LONG et al.: "Fully Convolutional Networks for Semantic Segmentation", 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) * |
XULEI YANG et al.: "Deep convolutional neural networks for automatic segmentation of left ventricle cavity from cardiac magnetic resonance images", IET COMPUTER VISION (Volume 11, Issue 8, December 2017) * |
周鲁科 et al.: "Research on a lung tumor image segmentation algorithm based on the U-net network" (in Chinese), INFORMATION & COMPUTER * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111640118B (en) * | 2019-03-01 | 2024-03-01 | 西门子医疗有限公司 | Tumor tissue characterization using multiparameter magnetic resonance imaging |
CN111640118A (en) * | 2019-03-01 | 2020-09-08 | 西门子医疗有限公司 | Tumor tissue characterization using multi-parameter magnetic resonance imaging |
US11969239B2 (en) | 2019-03-01 | 2024-04-30 | Siemens Healthineers Ag | Tumor tissue characterization using multi-parametric magnetic resonance imaging |
CN109919954A (en) * | 2019-03-08 | 2019-06-21 | 广州视源电子科技股份有限公司 | Target object recognition method and device |
CN109961446A (en) * | 2019-03-27 | 2019-07-02 | 深圳视见医疗科技有限公司 | CT/MR three-dimensional image segmentation processing method, device, equipment and medium |
CN109961446B (en) * | 2019-03-27 | 2021-06-01 | 深圳视见医疗科技有限公司 | CT/MR three-dimensional image segmentation processing method, device, equipment and medium |
CN110009623A (en) * | 2019-04-10 | 2019-07-12 | 腾讯科技(深圳)有限公司 | Image recognition model training and image recognition method, apparatus and system |
US11967414B2 (en) | 2019-04-10 | 2024-04-23 | Tencent Technology (Shenzhen) Company Limited | Image recognition model training method and apparatus, and image recognition method, apparatus, and system |
CN110992338A (en) * | 2019-11-28 | 2020-04-10 | 华中科技大学 | Auxiliary diagnosis system for primary lesion metastasis |
CN110992338B (en) * | 2019-11-28 | 2022-04-01 | 华中科技大学 | Auxiliary diagnosis system for primary lesion metastasis |
CN111784792A (en) * | 2020-06-30 | 2020-10-16 | 四川大学 | Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof |
CN113034461A (en) * | 2021-03-22 | 2021-06-25 | 中国科学院上海营养与健康研究所 | Pancreas tumor region image segmentation method and device and computer readable storage medium |
CN113192014A (en) * | 2021-04-16 | 2021-07-30 | 深圳市第二人民医院(深圳市转化医学研究院) | Training method, device, electronic equipment and medium for improving ventricle segmentation model |
CN113192014B (en) * | 2021-04-16 | 2024-01-30 | 深圳市第二人民医院(深圳市转化医学研究院) | Training method and device for improving ventricle segmentation model, electronic equipment and medium |
CN114155215B (en) * | 2021-11-24 | 2023-11-10 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Nasopharyngeal carcinoma recognition and tumor segmentation method and system based on MR image |
CN114155215A (en) * | 2021-11-24 | 2022-03-08 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Nasopharyngeal carcinoma identification and tumor segmentation method and system based on MR image |
Also Published As
Publication number | Publication date |
---|---|
CN108961274B (en) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108961274A (en) | Automatic head and neck tumor segmentation method in MRI images | |
RU2720440C1 (en) | Image segmentation method using neural network | |
Chen et al. | Hippocampus segmentation through multi-view ensemble ConvNets | |
CN109993733A (en) | Detection method, system, storage medium, terminal and the display system of pulmonary lesions | |
Micallef et al. | Exploring the u-net++ model for automatic brain tumor segmentation | |
CN104834943A (en) | Brain tumor classification method based on deep learning | |
CN109741343A (en) | T1WI-fMRI image tumor collaborative segmentation method based on 3D-Unet and graph theory | |
Wu et al. | The algorithm of watershed color image segmentation based on morphological gradient | |
CN109447963A (en) | Brain image recognition method and device | |
Lyu et al. | Labeling lateral prefrontal sulci using spherical data augmentation and context-aware training | |
CN111080657A (en) | CT image organ segmentation method based on convolutional neural network multi-dimensional fusion | |
CN112862805B (en) | Automatic auditory neuroma image segmentation method and system | |
CN110110808A (en) | Method, apparatus and computer-readable medium for target labeling of images | |
CN107133461A (en) | Medical image processing device and method based on an autoencoder | |
CN110008925A (en) | Automatic skin detection method based on ensemble learning | |
Martins et al. | An adaptive probabilistic atlas for anomalous brain segmentation in MR images | |
Sun et al. | Hierarchical amortized training for memory-efficient high resolution 3D GAN | |
CN115311193A (en) | Abnormal brain image segmentation method and system based on double attention mechanism | |
Jiang et al. | Deep cross‐modality (MR‐CT) educed distillation learning for cone beam CT lung tumor segmentation | |
CN116664590B (en) | Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image | |
Ferreira et al. | GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy | |
CN117372458A (en) | Three-dimensional brain tumor segmentation method, device, computer equipment and storage medium | |
CN116188479B (en) | Hip joint image segmentation method and system based on deep learning | |
Li et al. | Sketch-supervised histopathology tumour segmentation: Dual CNN-transformer with global normalised CAM | |
CN116309615A (en) | Multi-mode MRI brain tumor image segmentation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||