CN109300136A - Automatic segmentation method for organs at risk based on a convolutional neural network - Google Patents

Automatic segmentation method for organs at risk based on a convolutional neural network
Download PDF

Info

Publication number
CN109300136A
CN109300136A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810991434.0A
Other languages
Chinese (zh)
Other versions
CN109300136B (en)
Inventor
叶方焱
毛顺亿
浦剑
胡仲华
周建华
孙谷飞
王文化
石峰
Current Assignee
Shanghai Zhongan Information Technology Service Co ltd
Original Assignee
Zhongan Information Technology Service Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongan Information Technology Service Co Ltd
Priority to CN201810991434.0A
Publication of CN109300136A
Application granted
Publication of CN109300136B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/11 Region-based segmentation (G06T 7/00 Image analysis; G06T 7/10 Segmentation; edge detection)
    • G06N 3/045 Combinations of networks (G06N 3/00 Computing arrangements based on biological models; G06N 3/04 Architecture)
    • G06N 3/08 Learning methods
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/70 Denoising; smoothing (G06T 5/00 Image enhancement or restoration)
    • G06T 2207/10081 Computed X-ray tomography [CT] (G06T 2207/10 Image acquisition modality)
    • G06T 2207/30096 Tumor; lesion (G06T 2207/30004 Biomedical image processing)


Abstract

The invention discloses an automatic segmentation method for organs at risk based on a convolutional neural network, belonging to the technical field of image processing. The method comprises: S1: obtaining patient CT image data and corresponding annotation data; S2: preprocessing the CT image data and the corresponding annotation data; S3: building a 3D convolutional neural network model and inputting data blocks to obtain the prediction result image output by the model; S4: post-processing the prediction result image output by the 3D convolutional neural network model. The invention relies only on CT image data, so the raw data is easy to obtain and the range of application is wide. It achieves automatic segmentation of organs at risk in CT images without manual intervention, effectively improving segmentation efficiency and segmentation accuracy, and adds a post-processing operation that further optimizes the segmentation result.

Description

Automatic segmentation method for organs at risk based on a convolutional neural network
Technical field
The present invention relates to the technical field of image processing, and in particular to an automatic segmentation method for organs at risk based on a convolutional neural network.
Background art
Malignant tumors, commonly called cancer, are currently among the most intractable diseases worldwide and are the leading global cause of death. Among them, lung cancer is one of the thoracic malignancies with the fastest-growing incidence and mortality and the greatest threat to population health and life. The cause of lung cancer is still not fully understood, but a large body of data shows that long-term heavy smoking, urban air pollution, and carcinogens contained in fumes may all contribute to its occurrence. In addition, esophageal cancer is also a common thoracic tumor: according to statistics, about 300,000 people worldwide die of esophageal cancer every year, and China is one of the regions with the highest incidence of esophageal cancer in the world, with about 150,000 deaths annually. Treatment of such thoracic tumors is usually a combination of surgery, chemoradiotherapy, and biotherapy. With the development of science and technology, radiotherapy has become one of the three major means of treating malignant tumors, and about 60%-70% of tumor patients need to receive it. In radiotherapy, delineating the organs at risk (OARs) in order to protect them is a particularly important task.
As a non-invasive treatment technique, radiation therapy has become a very important means of treating malignant tumors. To achieve precise radiotherapy, before treatment the doctor must delineate the CT images according to the patient's actual condition and formulate an accurate radiotherapy plan. During radiotherapy, the organs at risk delineated in the images are then protected according to that plan, so as to avoid unnecessary radiation injury to them.
Given the current limitations of medical resources and the shortage of experienced doctors, delineating a large number of CT images is a heavy burden for physicians: it is not only inefficient but also highly subjective, leading to inaccurate delineation results that affect the accuracy of the radiotherapy plan and the efficacy of treatment. It is therefore necessary to provide an automatic segmentation method for organs at risk in chest CT to solve the above problems.
Summary of the invention
To solve the problems in the prior art, embodiments of the present invention provide an automatic segmentation method for organs at risk based on a convolutional neural network, overcoming problems of the prior art such as the inability to achieve fully automatic segmentation of organs at risk, low segmentation efficiency, low segmentation accuracy, and poor image quality after segmentation.
To solve the above technical problems, the technical solution adopted by the present invention is as follows:
An automatic segmentation method for organs at risk based on a convolutional neural network, the method comprising the following steps:
S1: obtaining patient CT image data and corresponding annotation data;
S2: preprocessing the CT image data and the corresponding annotation data;
S3: building a 3D convolutional neural network model and inputting data blocks to obtain the prediction result image output by the model;
S4: post-processing the prediction result image output by the 3D convolutional neural network model.
Further, step S2 specifically comprises:
S2.1: cleaning the CT image data;
S2.2: reconstructing the cleaned CT image data;
S2.3: constructing training and validation data sets from the reconstructed CT image data.
Further, step S2.1 specifically comprises:
S2.1.1: removing noisy data: screening the CT image data and removing data with imaging artifacts;
S2.1.2: standardizing the data of different CT images;
S2.1.3: standardizing the annotation data: converting the point coordinates of the annotation data in the physical coordinate system to pixel positions in the image coordinate system;
S2.1.4: connecting the coordinate points of the annotation data into closed regions and filling each region with its corresponding class.
Further, step S2.2 specifically comprises:
S2.2.1: 3D image reconstruction: arranging and recombining the two-dimensional CT images according to their actual relative positions to reconstruct a three-dimensional image;
S2.2.2: random sampling of three-dimensional data blocks: obtaining random data blocks by choosing random center coordinates, where the center of each data block is the chosen center;
S2.2.3: obtaining a low-resolution data block corresponding to each random data block.
Further, step S2.3 specifically comprises:
S2.3.1: obtaining the random data blocks, their corresponding low-resolution data blocks, and the annotation data, and distinguishing the different annotated organs with different labels;
S2.3.2: dividing the annotated data set into a training data set and a validation data set.
Further, step S3 specifically comprises:
S3.1: building a dual-channel 3D convolutional neural network model and feeding the data of the training data set into the two channels of the model;
S3.2: feeding the data of the training data set into the convolutional layers for 3D convolution to obtain feature maps;
S3.3: applying batch normalization to the feature maps produced by the convolutions, and then applying a nonlinear activation to the batch-normalized feature maps;
S3.4: repeating steps S3.2 and S3.3 on the activated feature maps until the main channel produces a 9x9x9 data block and the secondary channel produces a 3x3x3 data block, then upsampling the 3x3x3 data block to produce a data block whose size matches the main channel's output;
S3.5: concatenating the output data blocks of the two channels and feeding the result into a 1x1x1 convolutional layer for a fully connected operation, obtaining the prediction result image output by the model.
Further, step S3 further comprises:
setting hyperparameters and training the 3D convolutional neural network model.
Further, setting the hyperparameters and training the 3D convolutional neural network model specifically comprises:
determining the value range of each hyperparameter and then using grid search: experimenting with each group of parameters in turn and choosing the optimal hyperparameters.
Further, step S4 specifically comprises:
S4.1: relabeling the connected regions of the prediction result image;
S4.2: building a conditional random field model, training the conditional random field model with the original CT images and the prediction result images, and obtaining the optimized image.
Further, step S4.1 specifically comprises:
S4.1.1: binarizing the prediction result image, setting the foreground gray value to 255 and the background gray value to 0;
S4.1.2: traversing the prediction result image and finding each connected region;
S4.1.3: relabeling each connected region found so that pixels in the same connected region share the same label.
Further, step S4.2 specifically comprises:
S4.2.1: separating the classes of the prediction result image and obtaining, for each class, a probability map of each pixel in the image belonging to that class;
S4.2.2: building a conditional random field model to predict the probability that a given pixel belongs to each class;
S4.2.3: training the conditional random field model with the original CT images and the per-class probability maps, and obtaining the optimized image.
The technical solution provided by the embodiments of the present invention has the following beneficial effects:
1. The automatic segmentation method for organs at risk based on a convolutional neural network provided by the embodiments of the present invention only needs CT image data, so the raw data is easy to obtain and the range of application is wide;
2. The method uses existing data resources, combined with machine learning, deep learning, and computer vision algorithms from the field of artificial intelligence, to develop an automatic segmentation scheme for organs at risk in CT images during radiotherapy. It achieves automatic segmentation of organs at risk in CT images without manual intervention, effectively improving segmentation efficiency and segmentation accuracy. This relieves the workload of doctors, reduces the cost of medical care, improves medical efficiency, effectively alleviates the shortage of medical resources, and promotes the development of the medical industry;
3. The method adds a post-processing operation that further optimizes the segmentation result, removing discrete noise points and small noise blocks in the image, so that the output is a smoothed image.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the automatic segmentation method for organs at risk based on a convolutional neural network, according to an exemplary embodiment;
Fig. 2 is a flow chart of preprocessing the CT image data and the corresponding annotation data, according to an exemplary embodiment;
Fig. 3 is a flow chart of building the 3D convolutional neural network model and inputting data blocks to obtain the prediction result image output by the model, according to an exemplary embodiment;
Fig. 4 is a schematic diagram of the structure of the 3D convolutional neural network model, according to an exemplary embodiment;
Fig. 5 is a flow chart of post-processing the prediction result image output by the 3D convolutional neural network model, according to an exemplary embodiment;
Fig. 6 is a schematic diagram of the pooling operation, according to an exemplary embodiment;
Fig. 7 is a schematic diagram of the upsampling process, according to an exemplary embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flow chart of the automatic segmentation method for organs at risk based on a convolutional neural network, according to an exemplary embodiment. Referring to Fig. 1, the method comprises the following steps:
S1: obtaining patient CT image data and corresponding annotation data.
Specifically, patient CT image data is obtained from a hospital imaging department, and the corresponding annotation data is obtained from an annotation platform, where the annotation data includes delineation data of the organs at risk in the patient's chest. Since only CT image data and its annotations are needed, the raw data is easy to obtain and the range of application is wide.
S2: preprocessing the CT image data and the corresponding annotation data.
Specifically, the obtained CT image data and corresponding annotation data are preprocessed, where preprocessing includes data cleaning, data reconstruction, and construction of training and validation data sets.
S3: building a 3D convolutional neural network model and inputting data blocks to obtain the prediction result image output by the model.
Specifically, in the automatic segmentation method for organs at risk based on a convolutional neural network provided by the embodiments of the present invention, a dual-channel 3D convolutional neural network model is built. After the model has been built, the training data set and validation data set described above are fed into the model to train and validate it, and the prediction result image output by the model is obtained.
By building a dual-channel 3D convolutional neural network model, organs at risk in CT images are segmented automatically without manual intervention. This relieves the workload of doctors, reduces medical costs, improves medical efficiency, effectively alleviates the shortage of medical resources, and promotes the development of the medical industry.
S4: post-processing the prediction result image output by the 3D convolutional neural network model.
Specifically, the prediction result image output by the 3D convolutional neural network model is further processed to optimize it, removing discrete noise points and small noise blocks in the image so that the output is a smoothed image.
Fig. 2 is a flow chart of preprocessing the CT image data and the corresponding annotation data, according to an exemplary embodiment. Referring to Fig. 2, preprocessing the CT image data and the corresponding annotation data specifically comprises:
S2.1: cleaning the CT image data.
Further, cleaning the CT image data specifically comprises:
S2.1.1: removing noisy data: screening the CT image data and removing data with imaging artifacts.
Specifically, during imaging a CT image may acquire artifacts caused by hardware system faults or errors as well as by human factors. Artifacts degrade image quality and may even affect the analysis and diagnosis of lesions. The CT images therefore need to be screened, rejecting data with obvious imaging artifacts.
S2.1.2: standardizing the data of different CT images.
Specifically, because of differences in equipment and acquisition parameters, the window level and window width of different CT images may differ, so the image gray levels may vary considerably and need to be standardized. The window level and window width are read from the original DICOM file information of the CT image and the original features are transformed; the calculation formulas are as follows:
W_min = W_center - 0.5 × W_width
W_max = W_center + 0.5 × W_width
where W_center and W_width denote the window level and window width recorded in the DICOM file, and I is the set of all pixels in the image, I_i ∈ I; each pixel value I_i is mapped into the window range [W_min, W_max].
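The window transform above can be sketched in Python. The final clip-and-rescale step is an assumed normalization based on standard DICOM windowing, since the per-pixel mapping formula is not reproduced in the text:

```python
import numpy as np

def window_normalize(image, w_center, w_width):
    """Map raw CT intensities into the display window given by the
    DICOM window level (w_center) and window width (w_width)."""
    w_min = w_center - 0.5 * w_width
    w_max = w_center + 0.5 * w_width
    # Clip to [w_min, w_max] and rescale to [0, 1] (assumed normalization).
    clipped = np.clip(image, w_min, w_max)
    return (clipped - w_min) / (w_max - w_min)
```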
S2.1.3: standardizing the annotation data: converting the point coordinates of the annotation data in the physical coordinate system to pixel positions in the image coordinate system.
Specifically, the point coordinates of the annotation data in the physical coordinate system are converted to pixel positions in the image coordinate system through the correlation between the Image Position and Pixel Spacing fields of the DICOM file information. The conversion formulas are as follows:
x_new_i = (x_i - x*) / pixel_spacing
y_new_i = (y_i - y*) / pixel_spacing
where [x_i, y_i] is an annotated point coordinate, [x_new_i, y_new_i] is the converted pixel position, [x*, y*] is the Image Position, i.e. the x and y coordinates of the upper-left corner of the image in the spatial coordinate system, and pixel_spacing is the pixel spacing parameter read from the DICOM file.
S2.1.4: connecting the coordinate points of the annotation data into closed regions and filling each region with its corresponding class.
Specifically, the straight line between each pair of adjacent coordinate points is generated with the Bresenham algorithm to form a closed loop, and the closed region is filled using the fillPoly method of the OpenCV library.
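As an illustrative sketch of the contour-closing step, the Bresenham line between two annotation points can be generated as follows (the OpenCV fillPoly call mentioned above is omitted; this pure-Python version only shows the line rasterization):

```python
def bresenham_line(x0, y0, x1, y1):
    """Rasterize the integer line segment between two annotation points,
    including both endpoints (classic Bresenham algorithm)."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        points.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return points
```

Connecting each consecutive pair of contour points (and the last point back to the first) with this routine yields the closed loop that is then filled.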
S2.2: reconstructing the cleaned CT image data.
Further, reconstructing the cleaned CT image data specifically comprises:
S2.2.1: 3D image reconstruction: arranging and recombining the two-dimensional CT images according to their actual relative positions to reconstruct a three-dimensional image.
Specifically, the patient CT image data obtained from the hospital imaging department is generally two-dimensional. The out-of-order two-dimensional images are arranged and recombined according to their actual relative positions (slice location), reconstructing a three-dimensional image.
S2.2.2: random sampling of three-dimensional data blocks: obtaining random data blocks by choosing random center coordinates, where the center of each data block is the chosen center.
Specifically, the dual-channel 3D convolutional neural network model used in the present invention is a block-based deep neural network model, so to reduce computational cost, shorten training time, and improve model efficiency, the reconstructed three-dimensional image data and the corresponding annotation data are randomly sampled into data blocks. The embodiment of the present invention obtains random data blocks by choosing random center coordinates; the preferred data block size is 25x25x25, and the center of each data block is the chosen center (the point corresponding to the randomly chosen center coordinates).
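A minimal sketch of the 25x25x25 random block sampling; the restriction of the center coordinates so the block stays inside the volume is an assumption, since boundary handling is not specified in the text:

```python
import numpy as np

def sample_random_block(volume, block=25, rng=None):
    """Crop a random block×block×block sub-volume centered on a
    randomly chosen voxel coordinate (kept away from the borders)."""
    rng = rng if rng is not None else np.random.default_rng()
    half = block // 2
    # Choose a center far enough from each face that the block fits.
    center = [int(rng.integers(half, s - half)) for s in volume.shape]
    sl = tuple(slice(c - half, c + half + 1) for c in center)
    return volume[sl], center
```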
S2.2.3: obtaining a low-resolution data block corresponding to each random data block.
Specifically, the algorithm also requires a low-resolution but large-receptive-field data block corresponding to each random data block obtained in step S2.2.2. In a specific implementation, using the random center coordinates chosen in step S2.2.2, a data block three times the size of the original is obtained and a pooling operation is applied to it, generating a low-resolution data block of size 19x19x19. The pooling operation here is max pooling, which selects the maximum value within each sub-box; see Fig. 6 for details.
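The max-pooling step can be sketched with NumPy. The window size is left as a parameter, since the exact factor that reduces the enlarged block to 19x19x19 is only shown in Fig. 6:

```python
import numpy as np

def max_pool_3d(volume, k):
    """Non-overlapping 3D max pooling with a k×k×k window over the
    largest region of the volume divisible by k."""
    d, h, w = (s // k for s in volume.shape)
    v = volume[:d * k, :h * k, :w * k]
    # Split each axis into (blocks, within-block) and take the block maxima.
    return v.reshape(d, k, h, k, w, k).max(axis=(1, 3, 5))
```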
S2.3: constructing training and validation data sets from the reconstructed CT image data.
Further, constructing training and validation data sets from the reconstructed CT image data specifically comprises:
S2.3.1: obtaining the random data blocks, their corresponding low-resolution data blocks, and the annotation data, and distinguishing the different annotated organs with different labels.
Specifically, for the random data blocks, their corresponding low-resolution data blocks, and the annotation data obtained in the above steps, the different annotated organs are distinguished with different labels. For example, "0" is background, "1" is esophagus, "2" is heart, "3" is spine, "4" is left lung, and "5" is right lung.
S2.3.2: dividing the annotated data into a training data set and a validation data set.
Specifically, the annotated data is divided into two parts: a training data set and a validation data set. The embodiment of the present invention uses 80% of the raw data as the training data set for training the model and 20% as the validation data set for validating the model.
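The 80/20 split described above can be sketched as a simple shuffle-and-cut; the fixed seed is an illustrative assumption for reproducibility:

```python
import random

def train_val_split(samples, train_frac=0.8, seed=42):
    """Shuffle the annotated samples and split them into training and
    validation sets (default 80/20, as in the embodiment)."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```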
Fig. 3 is a flow chart of building the 3D convolutional neural network model and inputting data blocks to obtain the prediction result image output by the model, according to an exemplary embodiment. Referring to Fig. 3, building the 3D convolutional neural network model and inputting data blocks to obtain the prediction result image output by the model specifically comprises:
S3.1: building a dual-channel 3D convolutional neural network model and feeding the data of the training data set into the two channels of the model.
Specifically, a dual-channel 3D convolutional neural network model is built for the segmentation task. The model is a block-trained fully convolutional neural network with a dual-channel structure: the main channel processes the normal-resolution data blocks (i.e., the random data blocks above), and the secondary channel processes the low-resolution data blocks (i.e., the low-resolution data blocks corresponding to the random data blocks). The main channel captures image detail, while the secondary channel captures image position information and surrounding global information.
The training data set and validation data set obtained in the above steps are fed into the two channels of the model to train and validate it, and the prediction result image output by the model is obtained.
S3.2: feeding the data of the training data set into the convolutional layers for 3D convolution to obtain feature maps.
Specifically, the data of each random data block and its corresponding low-resolution data block are fed into convolutional layers C1_i(K1_i, F1_i) and C2_i(K2_i, F2_i) respectively for 3D convolution, starting with i = 1, where K1_i is the kernel size of the i-th convolution layer of the main channel, K2_i is the kernel size of the i-th convolution layer of the secondary channel, and F1_i and F2_i are the numbers of feature maps output by the i-th convolution layer of the main channel and the secondary channel, respectively.
S3.3: applying batch normalization to the feature maps produced by the convolutions, and then applying a nonlinear activation to the batch-normalized feature maps.
Specifically, batch normalization is applied to the feature maps produced by the convolutions, followed by a nonlinear activation of the batch-normalized feature maps; the activation function in the embodiment of the present invention is the ReLU function.
S3.4: repeating steps S3.2 and S3.3 on the activated feature maps until the main channel produces a 9x9x9 data block and the secondary channel produces a 3x3x3 data block, then upsampling the 3x3x3 data block to produce a data block whose size matches the main channel's output.
Specifically, for i = 2, 3, 4, ..., steps S3.2 and S3.3 are repeated on the feature maps obtained after each activation until the main channel finally produces a 9x9x9 data block and the secondary channel produces a 3x3x3 data block. The 3x3x3 data block is then upsampled (i.e., deconvolved) to produce a data block matching the main channel's output size (i.e., also a 9x9x9 data block). See Fig. 7 for the upsampling process.
S3.5: concatenating the output data blocks of the two channels and feeding the result into a 1x1x1 convolutional layer for a fully connected operation, obtaining the prediction result image output by the model.
Specifically, referring to Fig. 4, a schematic diagram of the structure of the 3D convolutional neural network model according to an exemplary embodiment, the size of the final prediction result image output by the model is 9x9x9.
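The block sizes stated above are consistent with stacks of unpadded ("valid") 3x3x3 convolutions; the layer count of eight in this sketch is an assumption inferred from the sizes (25 to 9, and 19 to 3, each lose 16 voxels per axis), not something the description states:

```python
def valid_conv_size(size, kernel=3):
    """Output size along one axis of a single unpadded convolution."""
    return size - (kernel - 1)

def channel_output(in_size, n_layers=8, kernel=3):
    """Per-axis size after a stack of unpadded convolutions."""
    size = in_size
    for _ in range(n_layers):
        size = valid_conv_size(size, kernel)
    return size
```

Under this assumption the main channel maps 25 to 9 per axis, the secondary channel maps 19 to 3, and a 3x upsampling of the secondary output matches the main channel's 9x9x9 block before concatenation.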
Further, step S3 further comprises:
setting hyperparameters and training the 3D convolutional neural network model.
Specifically, in the embodiment of the present invention, the objective function used by the dual-channel 3D convolutional neural network is the softmax cross-entropy function, defined as follows:
E(t, y) = -Σ_i t_i × log(y_i)
where E(t, y) denotes the expected loss, and t and y denote the target labels of the neural network and the output of the softmax function, respectively.
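A NumPy sketch of the softmax cross-entropy objective as reconstructed above (the small epsilon guarding the logarithm is an implementation assumption):

```python
import numpy as np

def softmax(z):
    """Softmax over a 1-D score vector, shifted for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(t, y, eps=1e-12):
    """E(t, y) = -sum_i t_i * log(y_i) for one-hot target t and softmax output y."""
    return -np.sum(t * np.log(y + eps))
```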
The optimization method used by the 3D convolutional neural network model is RMSprop, with the following formulas:
E[g²]_t = 0.9 × E[g²]_{t-1} + 0.1 × g_t²
θ_{t+1} = θ_t - (α / √(E[g²]_t + ε)) × g_t
where α is the learning rate, t denotes the epoch number, g denotes the gradient (g_t is the gradient at step t), θ_t is the model parameter at step t, and ε is a smoothing term that prevents the denominator from being 0, generally set to 1e-8.
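One RMSprop update under the reconstruction above can be sketched as follows; the 0.9/0.1 decay split is the common default for RMSprop and is an assumption, since the original formulas were lost in extraction:

```python
import numpy as np

def rmsprop_step(theta, grad, avg_sq, alpha=0.01, rho=0.9, eps=1e-8):
    """One RMSprop update: exponential moving average of squared gradients,
    then a gradient step scaled by the root of that average."""
    avg_sq = rho * avg_sq + (1.0 - rho) * grad ** 2
    theta = theta - alpha * grad / np.sqrt(avg_sq + eps)
    return theta, avg_sq
```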
Further, setting the hyperparameters and training the 3D convolutional neural network model specifically comprises:
For the 3D convolutional neural network model, the value range of each hyperparameter is first determined, and grid search is then used to select the optimal hyperparameters. For example, suppose a model has N hyperparameters and each hyperparameter P_i has n_i candidate values; enumerating the combinations produces ∏_{i=1}^{N} n_i parameter combinations, each of which is tested in turn to search for the optimal parameters.
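The exhaustive enumeration of all ∏ n_i parameter combinations can be sketched with itertools; the scoring callback stands in for a full train-and-validate run:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Try every combination in the hyperparameter grid and return the
    combination with the highest score (here, validation performance)."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```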
Fig. 5 is a flow chart of post-processing the prediction result image output by the 3D convolutional neural network model, according to an exemplary embodiment. Referring to Fig. 5, post-processing the prediction result image output by the 3D convolutional neural network model specifically comprises:
S4.1: relabeling the connected regions of the prediction result image.
Specifically, since ambiguous intermediate regions may appear in the prediction result image output by the 3D convolutional neural network model, each connected region of the prediction result image needs to be relabeled.
Further, re-labeling the connected regions of the prediction result image specifically includes:
S4.1.1: performing a binarization operation on the prediction result image, setting the gray value of the foreground image to 255 and the gray value of the background image to 0.
Specifically, the binarization operation is performed on the prediction result image by calling the threshold function of the OpenCV library, setting the foreground gray value to 255 and the background gray value to 0. Since the prediction result uses label "0" for the background, "1" for the esophagus, "2" for the heart, "3" for the spine, "4" for the left lung, and "5" for the right lung, the threshold used in the threshold function is 0, i.e., the gray value of every pixel whose label is greater than 0 is set to 255.
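The thresholding step amounts to the following pure-Python sketch (a stand-in for the OpenCV call, which would be along the lines of cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)); the small label map is an illustrative example:

```python
def binarize(label_image):
    # label 0 = background; labels 1-5 = organs (esophagus, heart, spine, lungs)
    # any pixel with label > 0 becomes foreground (255), the rest background (0)
    return [[255 if px > 0 else 0 for px in row] for row in label_image]

labels = [[0, 1, 0],
          [3, 3, 0],
          [0, 5, 5]]
binary = binarize(labels)
```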
S4.1.2: traversing the prediction result image to find each connected region.
Specifically, the pixel adjacency used during traversal is the 4-neighborhood: pixel A is considered connected to pixel B when A is the upper, lower, left, or right neighbor of B.
S4.1.3: re-labeling each connected region obtained, so that pixels in the same connected region share the same label.
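Steps S4.1.2 and S4.1.3 together amount to 4-neighborhood connected-component labeling; a minimal breadth-first flood-fill sketch (illustrative only, not the claimed implementation) is:

```python
from collections import deque

def label_components(binary):
    # assign a distinct label to every 4-connected foreground region via BFS
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                count += 1
                labels[sy][sx] = count
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    # 4-neighborhood: up, down, left, right only (no diagonals)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Under the 4-neighborhood, diagonal pixels are not connected, so the two foreground pixels in [[255, 0], [0, 255]] form two separate regions.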
S4.2: establishing a conditional random field model, training the conditional random field model with the original CT images and the prediction result images, and obtaining the optimized image.
Further, step S4.2 specifically includes:
S4.2.1: separating each class of the prediction result image to obtain probability maps in which each pixel of the image belongs to a certain class.
Specifically, the softmax output of the prediction model (i.e., the prediction result image output by the 3D convolutional neural network model) is obtained, and each class of the prediction result image is separated to obtain the probability maps in which each pixel of the image belongs to a certain class; the number of probability maps equals the number of classes.
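Separating the softmax output into one probability map per class, as in S4.2.1, can be sketched as follows; the 1x2 "image" and its per-pixel probability vectors are illustrative values:

```python
def split_probability_maps(softmax_out, num_classes):
    # softmax_out[y][x] is a probability vector over the classes for pixel (y, x)
    # returns maps[c][y][x] = P(pixel (y, x) belongs to class c), one map per class
    h, w = len(softmax_out), len(softmax_out[0])
    return [[[softmax_out[y][x][c] for x in range(w)] for y in range(h)]
            for c in range(num_classes)]

# a 1x2 "image" with 3 classes (e.g. background, esophagus, heart)
out = [[[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]]
maps = split_probability_maps(out, 3)
```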
S4.2.2: establishing a conditional random field model to predict the probability that a certain pixel belongs to each class.
Specifically, the conditional random field modeling formula in the embodiment of the present invention is as follows:

P(I | O) = (1 / Z(O)) · exp( Σ_{i=1..T} Σ_{k=1..M} λ_k · f_k(I_{i−1}, I_i, O, i) )

Wherein the subscript i denotes the current node position (T is the total number of nodes), the subscript k denotes the k-th feature function f_k (M is the total number of features), λ_k denotes its weight, and Z(O) is the normalization factor. P(I | O) is the probability that the conditional random field assigns to the hidden state sequence I given the observation sequence O. That is, in the image, given the classes of the pixels surrounding a certain pixel, the probability that the pixel belongs to each class is predicted.
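A brute-force toy version of the CRF formula above, with two hypothetical feature functions and hand-set illustrative weights (a real image CRF is far larger and its weights λ_k are learned, not fixed):

```python
import itertools
import math

# hypothetical feature functions f_k(prev_state, state, obs, i) with weights lambda_k
def f_match(prev, cur, obs, i):
    return 1.0 if cur == obs[i] else 0.0   # state agrees with the observation

def f_smooth(prev, cur, obs, i):
    return 1.0 if prev == cur else 0.0     # state agrees with its left neighbor

FEATURES = [(1.5, f_match), (0.8, f_smooth)]

def crf_prob(states, obs, state_space):
    # P(I|O) = exp(sum_i sum_k lambda_k * f_k) / Z(O),
    # with Z(O) summing the same exponential over every possible state sequence
    def score(seq):
        total = 0.0
        for i, cur in enumerate(seq):
            prev = seq[i - 1] if i > 0 else None
            total += sum(lam * f(prev, cur, obs, i) for lam, f in FEATURES)
        return total
    z = sum(math.exp(score(seq))
            for seq in itertools.product(state_space, repeat=len(obs)))
    return math.exp(score(tuple(states))) / z
```

Enumerating Z(O) explicitly is exponential in the sequence length; it is only viable for a toy like this, which is why practical CRF inference uses dynamic programming or mean-field approximations instead.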
S4.2.3: training the conditional random field model using the original CT images and the probability map of each class, and obtaining the optimized image.
Specifically, the conditional random field model is trained using the original CT images and the probability map of each class, and the output is the smoothed prediction result image. This operation removes discrete noise points and small noise blocks from the image.
In conclusion technical solution provided in an embodiment of the present invention has the benefit that
1. The convolutional-neural-network-based automatic segmentation method for organs at risk provided by the embodiments of the present invention only needs to obtain CT image data, so the raw data is easy to acquire and the application range is wide;
2. The convolutional-neural-network-based automatic segmentation method for organs at risk provided by the embodiments of the present invention uses existing data resources, combined with machine learning, deep learning, and computer vision algorithms from the field of artificial intelligence, to develop an automatic segmentation scheme for organs at risk in CT images during radiotherapy. It realizes automatic segmentation of organs at risk in CT images without manual intervention, effectively improving segmentation efficiency and the precision of segmentation results. It is significant for reducing the workload of doctors, lowering the cost of medical treatment, improving medical efficiency, and easing the strain on medical resources, and it can effectively alleviate the shortage of medical resources and promote the development of the medical industry;
3. The convolutional-neural-network-based automatic segmentation method for organs at risk provided by the embodiments of the present invention adds a post-processing operation to further optimize the segmentation result, removing discrete noise points and small noise blocks from the image, so that the output is a smoothed image.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (11)

1. An automatic segmentation method for organs at risk based on a convolutional neural network, characterized in that the method comprises the following steps:
S1: obtaining CT image data of a patient and corresponding labeled data;
S2: preprocessing the CT image data and the corresponding labeled data;
S3: establishing a 3D convolutional neural network model, inputting data blocks, and obtaining a prediction result image output by the model;
S4: optimizing the prediction result image output by the 3D convolutional neural network model.
2. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 1, characterized in that step S2 specifically comprises:
S2.1: cleaning the CT image data;
S2.2: reconstructing the cleaned CT image data;
S2.3: constructing a training data set and a validation data set from the reconstructed CT image data.
3. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 2, characterized in that step S2.1 specifically comprises:
S2.1.1: removing noise data: screening the CT image data and removing data with imaging artifacts;
S2.1.2: standardizing the data of different CT images;
S2.1.3: standardizing the labeled data: converting the point coordinate positions of the labeled data in the spatial coordinate system into pixel positions in the image coordinate system;
S2.1.4: connecting the coordinate points of the labeled data into closed regions and filling the closed regions with the corresponding classes.
4. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 2, characterized in that step S2.2 specifically comprises:
S2.2.1: reconstructing three-dimensional images: arranging and recombining the two-dimensional CT images according to their actual relative positions to reconstruct three-dimensional images;
S2.2.2: randomly sampling three-dimensional data blocks: obtaining random data blocks by selecting random center coordinates, wherein the center of each data block is the selected center;
S2.2.3: obtaining low-resolution data blocks corresponding to the random data blocks.
5. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 2, characterized in that step S2.3 specifically comprises:
S2.3.1: obtaining the random data blocks, their corresponding low-resolution data blocks, and the labeled data, wherein different organs in the labels are distinguished by different labels;
S2.3.2: dividing the labeled data set into a training data set and a validation data set.
6. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 1, characterized in that step S3 specifically comprises:
S3.1: constructing a dual-channel 3D convolutional neural network model, and inputting the data in the training data set into the two channels of the model respectively;
S3.2: feeding the data in the training data set into the convolutional layers for 3D convolution operations to obtain feature maps;
S3.3: performing a batch normalization operation on the feature maps obtained by convolution, and applying a nonlinear activation to the batch-normalized feature maps;
S3.4: repeating the operations of steps S3.2 and S3.3 on the activated feature maps until the main channel generates 9x9x9 data blocks and the secondary channel generates 3x3x3 data blocks, and upsampling the 3x3x3 data blocks to generate data blocks of the same size as the output of the main channel;
S3.5: cascading the output data blocks of the two channels, and putting the result into a 1x1x1 convolutional layer for a full connection operation to obtain the prediction result image output by the model.
7. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 1 or 6, characterized in that step S3 further comprises:
setting hyperparameters and training the 3D convolutional neural network model.
8. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 7, characterized in that setting the hyperparameters and training the 3D convolutional neural network model specifically comprises:
determining the value ranges of the hyperparameters, then using a grid search method to test each group of parameters and select the optimal hyperparameters.
9. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 1, characterized in that step S4 specifically comprises:
S4.1: re-labeling the connected regions of the prediction result image;
S4.2: establishing a conditional random field model, training the conditional random field model with the original CT images and the prediction result images, and obtaining the optimized image.
10. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 9, characterized in that step S4.1 specifically comprises:
S4.1.1: performing a binarization operation on the prediction result image, setting the gray value of the foreground image to 255 and the gray value of the background image to 0;
S4.1.2: traversing the prediction result image to find each connected region;
S4.1.3: re-labeling each connected region obtained, so that pixels in the same connected region share the same label.
11. The automatic segmentation method for organs at risk based on a convolutional neural network according to claim 9 or 10, characterized in that step S4.2 specifically comprises:
S4.2.1: separating each class of the prediction result image to obtain probability maps in which each pixel of the image belongs to a certain class;
S4.2.2: establishing a conditional random field model to predict the probability that a certain pixel belongs to each class;
S4.2.3: training the conditional random field model using the original CT images and the probability map of each class, and obtaining the optimized image.
CN201810991434.0A 2018-08-28 2018-08-28 Automatic segmentation method for organs at risk based on convolutional neural network Active CN109300136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810991434.0A CN109300136B (en) 2018-08-28 2018-08-28 Automatic segmentation method for organs at risk based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN109300136A true CN109300136A (en) 2019-02-01
CN109300136B CN109300136B (en) 2021-08-31

Family

ID=65165622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810991434.0A Active CN109300136B (en) 2018-08-28 2018-08-28 Automatic segmentation method for organs at risk based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN109300136B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163867A (en) * 2019-04-02 2019-08-23 成都真实维度科技有限公司 A method of divided automatically based on lesion faulted scanning pattern
CN110211139A (en) * 2019-06-12 2019-09-06 安徽大学 Automatic segmentation Radiotherapy of Esophageal Cancer target area and the method and system for jeopardizing organ
CN110428375A (en) * 2019-07-24 2019-11-08 东软医疗系统股份有限公司 A kind of processing method and processing device of DR image
CN110517257A (en) * 2019-08-30 2019-11-29 北京推想科技有限公司 Jeopardize organ markup information processing method and relevant apparatus
CN110717913A (en) * 2019-09-06 2020-01-21 浪潮电子信息产业股份有限公司 Image segmentation method and device
CN111127444A (en) * 2019-12-26 2020-05-08 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network
CN111462100A (en) * 2020-04-07 2020-07-28 广州柏视医疗科技有限公司 Detection equipment based on novel coronavirus pneumonia CT detection and use method thereof

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120087562A1 (en) * 2010-10-06 2012-04-12 Isaacs Robert E Imaging System and Method for Surgical and Interventional Medical Procedures
CN105654425A (en) * 2015-12-07 2016-06-08 天津大学 Single-image super-resolution reconstruction method applied to medical X-ray image
CN106920234A (en) * 2017-02-27 2017-07-04 北京连心医疗科技有限公司 A kind of method of the automatic radiotherapy planning of combined type
CN107016665A (en) * 2017-02-16 2017-08-04 浙江大学 A kind of CT pulmonary nodule detection methods based on depth convolutional neural networks
US20170307716A1 (en) * 2014-10-10 2017-10-26 Koninklijke Philips N.V. Propeller mr imaging with artefact suppression
CN107358600A (en) * 2017-06-14 2017-11-17 北京全域医疗技术有限公司 Automatic hook Target process, device and electronic equipment in radiotherapy planning
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method
CN107942271A (en) * 2017-12-01 2018-04-20 杭州电子科技大学 SPEED rapid magnetic resonance imaging methods based on iteration
CN108053417A (en) * 2018-01-30 2018-05-18 浙江大学 A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature
CN108198184A (en) * 2018-01-09 2018-06-22 北京理工大学 The method and system of contrastographic picture medium vessels segmentation
CN108428229A (en) * 2018-03-14 2018-08-21 大连理工大学 It is a kind of that apparent and geometric properties lung's Texture Recognitions are extracted based on deep neural network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MOHAMMAD S. ALAM等: "High-resolution infrared image reconstruction using multiple randomly shifted low-resolution aliased frames", 《PROCEEDINGS VOLUME 3063, INFRARED IMAGING SYSTEMS: DESIGN, ANALYSIS, MODELING, AND TESTING VIII》 *
ZHWHONG: "LIDC-IDRI肺结节公开数据集Dicom和XML标注详解", 《简书HTTPS://WWW.JIANSHU.COM/P/C4E9E18195EB》 *
门阔等: "利用深度反卷积神经网络自动勾画放疗危及器官", 《中国医学物理学杂志》 *
龙法宁等: "基于深层卷积网络的单幅图像超分辨率重建模型", 《广西科学》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163867A (en) * 2019-04-02 2019-08-23 成都真实维度科技有限公司 A method of divided automatically based on lesion faulted scanning pattern
CN110211139A (en) * 2019-06-12 2019-09-06 安徽大学 Automatic segmentation Radiotherapy of Esophageal Cancer target area and the method and system for jeopardizing organ
CN110428375A (en) * 2019-07-24 2019-11-08 东软医疗系统股份有限公司 A kind of processing method and processing device of DR image
CN110428375B (en) * 2019-07-24 2024-03-01 东软医疗系统股份有限公司 DR image processing method and device
CN110517257A (en) * 2019-08-30 2019-11-29 北京推想科技有限公司 Jeopardize organ markup information processing method and relevant apparatus
CN110717913A (en) * 2019-09-06 2020-01-21 浪潮电子信息产业股份有限公司 Image segmentation method and device
WO2021042641A1 (en) * 2019-09-06 2021-03-11 浪潮电子信息产业股份有限公司 Image segmentation method and apparatus
CN110717913B (en) * 2019-09-06 2022-04-22 浪潮电子信息产业股份有限公司 Image segmentation method and device
CN111127444A (en) * 2019-12-26 2020-05-08 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network
CN111127444B (en) * 2019-12-26 2021-06-04 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network
CN111462100A (en) * 2020-04-07 2020-07-28 广州柏视医疗科技有限公司 Detection equipment based on novel coronavirus pneumonia CT detection and use method thereof

Also Published As

Publication number Publication date
CN109300136B (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN109300136A (en) It is a kind of to jeopardize organs automatic segmentation method based on convolutional neural networks
Zhang et al. ME‐Net: multi‐encoder net framework for brain tumor segmentation
CN111709953B (en) Output method and device in lung lobe segment segmentation of CT (computed tomography) image
Zhu et al. AnatomyNet: deep learning for fast and fully automated whole‐volume segmentation of head and neck anatomy
Fechter et al. Esophagus segmentation in CT via 3D fully convolutional neural network and random walk
JP2022525198A (en) Deep convolutional neural network for tumor segmentation using positron emission tomography
Pan et al. 2D medical image synthesis using transformer-based denoising diffusion probabilistic model
CN107622492A (en) Lung splits dividing method and system
CN108010021A (en) A kind of magic magiscan and method
CN107203989A (en) End-to-end chest CT image dividing method based on full convolutional neural networks
JP2019114262A (en) Medical image processing apparatus, medical image processing program, learning apparatus and learning program
Pradhan et al. Lung cancer detection using 3D convolutional neural networks
Nguyen et al. 3D Unet generative adversarial network for attenuation correction of SPECT images
Ayub et al. LSTM-based RNN framework to remove motion artifacts in dynamic multicontrast MR images with registration model
Baydoun et al. Dixon-based thorax synthetic CT generation using Generative Adversarial Network
Abdi et al. GAN-enhanced conditional echocardiogram generation
Saha et al. A survey on artificial intelligence in pulmonary imaging
CN116630738A (en) Energy spectrum CT imaging method based on depth convolution sparse representation reconstruction network
Kim et al. CNN-based CT denoising with an accurate image domain noise insertion technique
CN115690423A (en) CT sequence image liver tumor segmentation method based on deep learning
CN115841457A (en) Three-dimensional medical image segmentation method fusing multi-view information
Lin et al. Usformer: A Light Neural Network for Left Atrium Segmentation of 3D LGE MRI
Parages et al. A Naive-Bayes model observer for a human observer in detection, localization and assessment of perfusion defects in SPECT
Kening et al. Nested recurrent residual unet (nrru) on gan (nrrg) for cardiac ct images segmentation task
Grewal et al. Learning Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer Radiation Treatment from Clinically Available Annotations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240306

Address after: Room 1179, W Zone, 11th Floor, Building 1, No. 158 Shuanglian Road, Qingpu District, Shanghai, 201702

Patentee after: Shanghai Zhongan Information Technology Service Co.,Ltd.

Country or region after: China

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: ZHONGAN INFORMATION TECHNOLOGY SERVICE Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240415

Address after: Room 1179, W Zone, 11th Floor, Building 1, No. 158 Shuanglian Road, Qingpu District, Shanghai, 201702

Patentee after: Shanghai Zhongan Information Technology Service Co.,Ltd.

Country or region after: China

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: ZHONGAN INFORMATION TECHNOLOGY SERVICE Co.,Ltd.

Country or region before: China