A method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network
Technical field
The present invention relates to the fields of image processing, deep learning and medicine, and in particular to a method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network.
Background technique
In the medical domain, intensity-modulated conformal radiation therapy can greatly improve the survival rate and quality of life of cancer patients. However, this advanced treatment method requires the contour of the target tumor to be delineated accurately, and preparing a radiation treatment plan can cost a radiation oncologist several hours. Meanwhile, cancer incidence is expected to keep rising over the next decade and will continue to aggravate the global healthcare burden. In particular, statistics show that up to 60,000 new nasopharyngeal carcinoma cases are recorded every year in China and Southeast Asia. Because the nasopharynx adjoins the skull base, 60%~70% of patients already present with destruction of skull-base bone when seeking treatment, and 18% even present with intracranial and/or cavernous sinus involvement, making surgery difficult. At the same time, 85% of patients present with cervical or retropharyngeal lymph node metastasis and are unsuitable for surgery. The World Health Organization divides nasopharyngeal carcinoma into three types: keratinizing squamous cell carcinoma, non-keratinizing carcinoma and undifferentiated carcinoma. In China and Southeast Asia, 95% of nasopharyngeal carcinomas are undifferentiated, and the vast majority of the remaining 5% are non-keratinizing; both have moderate sensitivity to radiation, so radiotherapy is the primary treatment for nasopharyngeal carcinoma. To ensure that more patients can receive timely and effective radiotherapy under limited medical resources, simplifying the radiation oncologist's workflow and improving the efficiency of radiation treatment planning is of great importance.
In recent years, there has been keen interest in exploring artificial-intelligence-assisted diagnosis of disease, and in certain fields AI algorithms have produced mathematical models whose performance exceeds that of human experts. In preliminary studies on automatically delineating the primary tumor of nasopharyngeal carcinoma, artificial intelligence (AI) has proven to be a powerful approach, showing considerable advantage in normal-tissue segmentation tasks. It is therefore believed that building an AI-based tumor-contouring tool with deep learning and embedding an AI-assisted contouring method in the radiotherapy treatment-planning workflow can effectively improve the working efficiency of medical staff. Against the background of ever-increasing demand for radiotherapy, this is particularly attractive for low- and middle-income countries that lack radiotherapy resources. Moreover, the method can be extended to all other cancer types, bringing substantive progress toward changing the future radiotherapy workflow.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and to provide a method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network.
In order to solve the above technical problems, the technical solution adopted by the present invention is that:
A method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network, comprising the following steps:
S1: acquire the magnetic resonance three-dimensional images of the patient and perform preliminary data preprocessing on them;
S2: perform gray-scale deviation-field correction on the preprocessed magnetic resonance three-dimensional images, so that gray values within the same tissue of one image become more uniform;
S3: process the magnetic resonance three-dimensional images with an improved histogram-matching algorithm: train a gray-scale mapping function that matches the histogram of each image to the histogram of a pre-selected template image, so that gray values of the same tissue in different images become closer;
S4: extract the ROI of the magnetic resonance three-dimensional image and divide it into 2*2 overlapping patches as the input of the model;
S5: construct and train a deep semantic segmentation network to identify the primary tumor;
S6: input the multiple patches into the trained deep semantic segmentation network to identify the primary tumor;
S7: merge the recognition results of the multiple output patches to obtain the final primary tumor recognition result;
S8: post-process the primary tumor recognition result with a mean-field iterative algorithm.
Preferably, in step S1, the magnetic resonance three-dimensional images comprise four different data sequences, T1, T1C, T1FSC and T2; the data of each sequence form one 3-D image, and the four 3-D images are preprocessed by down-sampling and normalization.
Preferably, step S2 is specifically:
Gray-scale deviation-field correction is modeled as the following problem:
v(x) = u(x) + f(x)
where v is the given image, u is the corrected image, f is the deviation field, and x is a pixel coordinate of the image. The problem is solved by the following iterative process:
f̂^(n) = S{v − û^(n−1)}
û^(n) = v − f̂^(n)
where û^(n) is the corrected image output after the n-th iteration, f̂^(n) is the deviation-field estimate in the n-th iteration, and S{·} is a smoothing operator fitted with B-spline curves; the corrected image output by the previous iteration provides the expected value of the current corrected image. After N iterations, if û^(n) has converged, the calculation ends and û^(n) is taken as the corrected image; otherwise the iteration continues.
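The additive model v = u + f can be sketched as follows. This is a minimal illustration only: a separable box blur stands in for the B-spline-fitted smoothing operator S{·}, a single pass is shown (equivalent to one iteration started from û^(0) = 0, so that the smooth component of v is attributed to the field), and the synthetic test data below are hypothetical.

```python
import numpy as np

def box_smooth(img, radius):
    """Separable box blur standing in for the B-spline smoothing operator S{.}."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = img.astype(float)
    for axis in range(out.ndim):
        out = np.apply_along_axis(np.convolve, axis, out, k, mode="same")
    return out

def correct_bias(v, radius=2):
    """Additive model v = u + f: estimate the deviation field f as the smooth
    component of the observed image, and take u = v - f as the corrected image.
    One illustrative pass of the iterative scheme of step S2."""
    f = box_smooth(v, radius)   # f^(n) = S{...}: smooth-component estimate
    u = v - f                   # u^(n) = v - f^(n)
    return u, f
```

On a synthetic image whose true content is high-frequency and whose deviation field is a smooth ramp, the smooth component recovers the ramp in the interior (edge pixels suffer from zero padding in the blur).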
Preferably, step S3 specifically includes the following steps:
S31: select the image of one patient from a large number of patients as the template for subsequent histogram matching, the selection principle being that the image's gray-value distribution is average;
S32: calculate the histogram of the image and the histogram of the template image to obtain the gray-value distributions of the two images;
S33: solve, by dynamic programming, the optimal gray-value mapping function that matches the gray-value distributions of the two images, so that the gray values of the image can be mapped to the gray values of the template image.
Preferably, step S4 specifically includes the following steps:
S41: convert the magnetic resonance three-dimensional image into a binary map with a threshold of 20;
S42: for the binarized magnetic resonance three-dimensional image, sum all pixels of each two-dimensional slice along the z-axis and plot the resulting curve;
S43: divide the ROI in the x/y plane into 2*2 overlapping patches as the input of the deep network.
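Steps S41 and S42, together with the neck split used in the detailed embodiment (the first local minimum of the slice-sum curve), can be sketched as follows. The strictly-greater threshold comparison and the synthetic volume are illustrative assumptions.

```python
import numpy as np

def slice_profile(volume, threshold=20):
    """Steps S41-S42: binarize the volume at `threshold`, then sum the
    foreground pixels of each two-dimensional slice along the z-axis."""
    binary = (volume > threshold).astype(np.int64)
    return binary.sum(axis=(1, 2))   # one foreground count per z slice

def first_local_minimum(profile):
    """Index of the first interior local minimum of the slice-sum curve,
    taken in the embodiment as the dividing line of the neck."""
    for z in range(1, len(profile) - 1):
        if profile[z] <= profile[z - 1] and profile[z] < profile[z + 1]:
            return z
    return None
```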
Preferably, step S5 specifically includes the following steps:
S51: construct the network structure, which includes an encoder, a decoder and skip connections:
Encoder: extracts high-level abstract features from the input image through convolutional layers and down-sampling; the image is encoded into a feature map whose size is only 1/16 of the original image;
Decoder: decodes the feature map output by the encoder through convolutional layers and up-sampling, and outputs a 3-D image of the same size as the original, in which each pixel value indicates the probability that the pixel belongs to the primary tumor region;
Skip connections: directly connect the high-resolution shallow features of the encoder with the lower-resolution, higher-level features of the decoder, solving the problem that high-resolution information is lost in high-level features;
S52: train the deep semantic segmentation network with the images of a large number of patients processed by the preceding steps; the input of the deep semantic segmentation network is the magnetic resonance three-dimensional image, the output magnetic resonance three-dimensional image has the same size as the input, and each pixel value lies in [0, 1], indicating the probability that the pixel belongs to the primary tumor region.
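The encoder-decoder wiring of step S51 can be sketched on one 2-D slice as follows. This is a non-trainable shape illustration only: average pooling and nearest-neighbour upsampling stand in for the convolutional layers, and skip features are merged by averaging (a simplification; concatenation is the more common choice).

```python
import numpy as np

def downsample2(x):
    """2x average pooling along each spatial axis (stand-in for strided conv)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbour upsampling (stand-in for transposed conv)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encoder_decoder(image):
    """Four halvings encode the input to 1/16 of its size along each axis;
    four doublings decode it back, merging each level with the saved
    encoder features (the skip connections of step S51)."""
    skips, x = [], image
    for _ in range(4):                 # encoder path
        skips.append(x)                # keep high-resolution features
        x = downsample2(x)
    bottleneck_shape = x.shape         # 1/16 of the input per axis
    for _ in range(4):                 # decoder path
        x = upsample2(x)
        x = 0.5 * (x + skips.pop())    # skip connection
    return x, bottleneck_shape
```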
Preferably, step S8 specifically includes the following steps:
S81: initialization stage: the probability that each pixel belongs to the primary tumor is initialized from the output of the deep semantic segmentation network:
Q_i(x_i) = (1/z_i) exp(U_i(x_i))
x_i ∈ L, L = {0: does not belong to GTV, 1: belongs to GTV}
where i is the pixel coordinate position, x_i is the label of the pixel, Q_i(x_i) is the probability that the label of the pixel is x_i, z_i is a normalization factor, and U_i(x_i) is the output of the deep semantic segmentation network;
S82: message-passing stage: for each pixel, m Gaussian features are computed from the surrounding pixels:
Q̃_i^(m)(l) = Σ_{j≠i} k^(m)(f_i, f_j) Q_j(l)
where Q̃_i^(m)(l) is the m-th Gaussian feature of the i-th pixel computed for label l, Q_j(l) is the probability that the j-th surrounding pixel has label l, f_i is the feature vector of the i-th pixel, k^(m)(f_i, f_j) = exp(−(1/2)(f_i − f_j)^T Λ_m (f_i − f_j)) is the m-th Gaussian kernel, used to measure the similarity between the feature vectors of different pixels, and Λ_m is the parameter of the Gaussian kernel;
S83: integration stage: the features computed in the message-passing stage are integrated:
Q̂_i(x_i) = Σ_{l∈L} u(x_i, l) Σ_m w_m Q̃_i^(m)(l)
where u(x_i, x_j) = [x_i ≠ x_j] expresses the compatibility between labels, and w_m is the weight of the m-th Gaussian feature;
S84: update stage: the probability that each pixel belongs to the primary tumor is updated by the following formula and normalized so that the probabilities lie in [0, 1]:
Q_i(x_i) = (1/z_i) exp(U_i(x_i) − Q̂_i(x_i))
S85: if Q_i(x_i) has converged, the calculation ends; otherwise the iteration continues.
In this technical scheme, one complete iteration consists of the above initialization stage, message-passing stage, integration stage and update stage. After N iterations, if Q_i(x_i) has converged, the calculation ends; otherwise the iteration continues.
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention utilizes multiple magnetic resonance three-dimensional sequences of the patient and applies data-preprocessing steps such as gray-scale deviation-field correction and histogram matching, which can effectively improve the quality of the input data and thereby the prediction performance of the deep model.
(2) The present invention uses a deep learning method with an encoder-decoder network structure and skip connections, which can learn both the global information and the detailed information of high-resolution images; combined with the mean-field-iteration post-processing, this can effectively improve the accuracy of prediction and the generalization ability of the model.
(3) The present invention embeds an AI-assisted contouring method in the radiotherapy treatment-planning workflow, which can effectively improve the working efficiency of medical staff. Against the background of ever-increasing demand for radiotherapy, this is particularly attractive for low- and middle-income countries that lack radiotherapy resources. Moreover, the method can be extended to all other cancer types, bringing substantive progress toward changing the future radiotherapy workflow.
Description of the drawings
Fig. 1 is a flow chart of the steps of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention;
Fig. 2 is the deep network structure of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention;
Fig. 3 is a schematic diagram of the patch division of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention.
Specific embodiments
The present invention is further illustrated below with reference to the accompanying drawings and embodiments. The drawings are for illustration only; they are schematic diagrams rather than physical drawings and should not be understood as limiting this patent. In order to better illustrate the embodiments of the present invention, certain components in the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product; for those skilled in the art, the omission of some known structures and their descriptions in the drawings is understandable.
In the drawings of the embodiments of the present invention, the same or similar labels correspond to the same or similar components. In the description of the present invention, it should be understood that if terms such as "upper", "lower", "left" and "right" indicate orientations or positional relationships, they are based on the orientations or positional relationships shown in the drawings, are used merely for convenience and simplification of description, and do not indicate or suggest that the device or element referred to must have a particular orientation or be constructed and operated in a specific orientation; such terms are therefore for illustration only and should not be understood as limiting this patent. Those of ordinary skill in the art can understand the specific meanings of the above terms according to the particular situation.
Embodiment
Referring to Fig. 1, a method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network includes the following steps:
A: acquire the magnetic resonance three-dimensional images of the patient and perform preliminary data preprocessing on them;
B: perform gray-scale deviation-field correction on the preprocessed magnetic resonance three-dimensional images, so that gray values within the same tissue of one image become more uniform;
C: process the magnetic resonance three-dimensional images with an improved histogram-matching algorithm: train a gray-scale mapping function that matches the histogram of each image to the histogram of a pre-selected template image, so that gray values of the same tissue in different images become closer;
D: extract the ROI of the magnetic resonance three-dimensional image and divide it into 2*2 overlapping patches as the input of the model;
E: construct and train a deep semantic segmentation network to identify the primary tumor;
F: input the multiple patches into the trained deep semantic segmentation network to identify the primary tumor;
G: merge the recognition results of the multiple output patches to obtain the final primary tumor recognition result;
H: post-process the primary tumor recognition result with a mean-field iterative algorithm.
The workflow of one specific embodiment of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention includes:
S1, data preparation and preliminary preprocessing.
Step S1 specifically includes the following steps:
S11, collect the magnetic resonance three-dimensional images of the patient, including four different data sequences T1, T1C, T1FSC and T2;
S12, apply preliminary preprocessing steps such as down-sampling and gray-value normalization;
S13, merge the four sequence data processed by the above steps into one three-dimensional image with four channels (analogous to the RGB channels of a two-dimensional natural image).
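Steps S11-S13 can be sketched as follows. The z-score normalization is one common choice of gray-value standardization and is an assumption here, as are the synthetic inputs.

```python
import numpy as np

def stack_sequences(t1, t1c, t1fsc, t2):
    """Normalize each MR sequence and stack the four volumes into a single
    4-channel 3-D image, analogous to the RGB channels of a 2-D image."""
    channels = []
    for vol in (t1, t1c, t1fsc, t2):
        vol = vol.astype(float)
        # z-score normalization (assumed form of "gray-value normalization")
        channels.append((vol - vol.mean()) / (vol.std() + 1e-8))
    return np.stack(channels, axis=0)   # shape (4, z, y, x)
```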
S2, perform gray-scale deviation-field correction on the preliminarily preprocessed 3-D image.
Step S2 specifically:
S21, model gray-scale deviation-field correction as the following problem:
v(x) = u(x) + f(x)
where v is the given image, u is the corrected image, f is the deviation field, and x is a pixel coordinate of the image.
S22, initialize û^(0) = v, the iteration count n = 1, and the control-point set P;
S23, compute f̂^(n) = S{v − û^(n−1)};
S24, update û^(n) = v − f̂^(n);
S25, if û^(n) has converged, the calculation ends; otherwise, let n = n + 1 and return to step S23.
Here û^(n) is the corrected image output after the n-th iteration, f̂^(n) is the deviation-field estimate in the n-th iteration, and S{·} is a smoothing operator fitted with B-spline curves; the corrected image output by the previous iteration provides the expected value of the current corrected image. S is computed by the following formula:
S{g}(x, y, z) = Σ_{k=0..3} Σ_{l=0..3} Σ_{m=0..3} B_k(a) B_l(b) B_m(c) P_{i+k, j+l, h+m}
where i, j and h index the control-point cell containing (x, y, z), and a, b, c ∈ [0, 1) are the fractional offsets of (x, y, z) within that cell, obtained with the floor operator ⌊·⌋ (rounding down); n_x, n_y and n_z are the numbers of control points in the x, y and z directions; P is the control-point set and B_0..B_3 are the cubic B-spline basis functions.
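The cubic B-spline basis functions used by the smoothing operator S{·} can be sketched in one dimension as follows; the function names are illustrative, and the basis functions satisfy the partition-of-unity property (they sum to one at every fractional offset).

```python
def bspline_basis(u):
    """Cubic B-spline basis functions B_0..B_3 at fractional offset u in [0, 1)."""
    return (
        (1.0 - u) ** 3 / 6.0,
        (3.0 * u ** 3 - 6.0 * u ** 2 + 4.0) / 6.0,
        (-3.0 * u ** 3 + 3.0 * u ** 2 + 3.0 * u + 1.0) / 6.0,
        u ** 3 / 6.0,
    )

def spline_smooth_1d(control, t):
    """One-dimensional analogue of S{.}: evaluate the cubic B-spline defined
    by the control points at parameter t, where i = floor(t) selects the
    four-point window and u = t - i is the fractional offset."""
    i = int(t)
    b = bspline_basis(t - i)
    return sum(b[k] * control[i + k] for k in range(4))
```

Because the basis sums to one, a spline through constant control points reproduces that constant exactly, which is why the fitted surface varies smoothly between control points.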
S3, apply the improved histogram matching to the 3-D image after gray-scale deviation-field correction.
Step S3 specifically includes the following steps:
S31, select the image of one patient from a large number of patients as the template for subsequent histogram matching, the selection principle being that the image's gray-value distribution is average;
S32, calculate the histogram of the image and the histogram of the template image to obtain the gray-value distributions of the two images;
S33, solve, by dynamic programming, the optimal gray-value mapping function that matches the gray-value distributions of the two images, so that the gray values of the image can be mapped to the gray values of the template image. The optimal gray-value mapping function minimizes the following overall penalty function:
D(0, 0) = 0
D(i, j) = ∞ (i ≤ 0 or j ≤ 0, (i, j) ≠ (0, 0))
D(m, n) = min_{k≥1, l≥1} { D(m − k, n − l) + d_{k,l}(m, n) }
where D(m, n) is the overall penalty of matching the first m gray values in the histogram of image A to the first n gray values in the histogram of image B, and d_{k,l}(m, n) is the local penalty of a k-to-l mapping, i.e. of matching the (m − k + 1)-th to m-th gray values in the histogram of image A to the (n − l + 1)-th to n-th gray values in the histogram of image B, expressed by the following formula:
d_{k,l}(m, n) = | (H^A_m − H^A_{m−k}) − (H^B_n − H^B_{n−l}) |
where h^A_m and h^B_n respectively denote the frequencies of the m-th and n-th gray values in image A and image B (the template image), and H^A and H^B denote the corresponding cumulative frequencies.
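The dynamic program of step S33 can be sketched as follows; the absolute-difference local penalty on cumulative frequencies is one natural reading of the described penalty, and the brute-force search over all (k, l) groupings is for clarity, not efficiency.

```python
import numpy as np

def histogram_match_penalty(hist_a, hist_b):
    """D(m, n): minimal total penalty of matching the first m gray values of A
    to the first n of B, built from k-to-l local penalties
    d_{k,l}(m, n) = |(H^A_m - H^A_{m-k}) - (H^B_n - H^B_{n-l})|.
    Returns the full DP table; the bottom-right entry is the optimal penalty."""
    ga, gb = len(hist_a), len(hist_b)
    ca = np.concatenate(([0.0], np.cumsum(hist_a)))   # cumulative frequencies H^A
    cb = np.concatenate(([0.0], np.cumsum(hist_b)))   # cumulative frequencies H^B
    d = np.full((ga + 1, gb + 1), np.inf)
    d[0, 0] = 0.0                                     # D(0, 0) = 0
    for m in range(1, ga + 1):
        for n in range(1, gb + 1):
            for k in range(1, m + 1):
                for l in range(1, n + 1):
                    local = abs((ca[m] - ca[m - k]) - (cb[n] - cb[n - l]))
                    d[m, n] = min(d[m, n], d[m - k, n - l] + local)
    return d
```

Identical histograms match with zero penalty along the diagonal, and many-to-one groupings let differently binned histograms still match perfectly when their total masses agree.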
S4, extract the ROI (region of interest) from the histogram-matched 3-D image and divide it into multiple patches.
Step S4 specifically includes the following steps:
S41, convert the 3-D image into a binary map with a threshold of 20;
S42, for the binarized 3-D image, sum all pixels of each two-dimensional slice (single slice) along the z-axis and plot the resulting curve;
S43, take the first minimum point of the curve as the dividing line of the neck, and take the image above the neck as the ROI;
S44, referring to Fig. 3, divide the ROI in the x/y plane into 2*2 overlapping patches (3-D images) as the input of the deep network. As illustrated in the figure, the original size of a single slice (x/y plane) is 512*512, with the GTV contour curve in the middle; the plane is covered by four patches of size 320*320, with overlap between the patches, especially in the GTV region.
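The 2*2 patch division of step S44 can be sketched on one slice as follows: four 320*320 patches, one anchored at each corner of the 512*512 plane, so that the central GTV region is covered by all four.

```python
import numpy as np

def split_into_patches(plane, patch=320):
    """Cover one x/y slice with four overlapping patches, one anchored at
    each corner (step S44); the centre of the slice is covered four times."""
    h, w = plane.shape
    anchors = [(0, 0), (0, w - patch), (h - patch, 0), (h - patch, w - patch)]
    return [plane[r:r + patch, c:c + patch] for r, c in anchors], anchors
```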
S5, construct a deep semantic segmentation network to identify the GTV, and train it.
Step S5 specifically includes the following steps:
S51, construct the network structure, comprising:
Encoder: extracts high-level abstract features from the input image through convolutional layers and down-sampling; the image is encoded into a feature map whose size is only 1/16 of the original image.
Decoder: decodes the feature map output by the encoder through convolutional layers and up-sampling, and outputs a 3-D image of the same size as the original, in which each pixel value indicates the probability that the pixel belongs to the GTV region.
Skip connections: directly connect the high-resolution shallow features of the encoder with the lower-resolution, higher-level features of the decoder, solving the problem that high-resolution information is lost in high-level features.
S52, train the deep semantic segmentation network with the images of a large number of patients processed by the above preprocessing steps. The input of the network is a 3-D image, the output 3-D image has the same size as the input, and each pixel value lies in [0, 1], indicating the probability that the pixel belongs to the GTV region.
S6, input the multiple patches obtained by the above preprocessing steps into the trained deep semantic segmentation network to identify the nasopharyngeal carcinoma primary tumor.
Step S6 specifically includes the following steps:
S61, input the multiple patches of the patient into the trained deep semantic segmentation network to identify the nasopharyngeal carcinoma primary tumor.
S62, merge the recognition results of the multiple output patches by averaging (if a pixel belongs to several patches at once, its predicted value is the mean of those patches' predicted values at that pixel) to obtain the final GTV recognition result.
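The averaging fusion of step S62 can be sketched as follows: per-patch probability maps are accumulated into the full-size canvas together with a coverage count, and each pixel takes the mean over the patches that contain it.

```python
import numpy as np

def merge_patch_predictions(pred_patches, anchors, full_shape):
    """Fuse per-patch probability maps by averaging (step S62); a pixel
    covered by several patches receives the mean of their predictions."""
    total = np.zeros(full_shape)
    count = np.zeros(full_shape)
    for pred, (r, c) in zip(pred_patches, anchors):
        ph, pw = pred.shape
        total[r:r + ph, c:c + pw] += pred
        count[r:r + ph, c:c + pw] += 1.0
    return total / np.maximum(count, 1.0)   # mean over covering patches
```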
S7, post-process the recognition result output by the deep network with a mean-field iterative algorithm.
Step S7 specifically includes the following steps:
S71, initialization stage: the probability that each pixel belongs to the GTV is initialized from the output of the deep semantic segmentation network:
Q_i(x_i) = (1/z_i) exp(U_i(x_i))
x_i ∈ L, L = {0: does not belong to GTV, 1: belongs to GTV}
where i is the pixel coordinate position, x_i is the label of the pixel, Q_i(x_i) is the probability that the label of the pixel is x_i, z_i is a normalization factor, and U_i(x_i) is the output of the deep semantic segmentation network.
S72, message-passing stage: for each pixel, m Gaussian features are computed from the surrounding pixels:
Q̃_i^(m)(l) = Σ_{j≠i} k^(m)(f_i, f_j) Q_j(l)
where Q̃_i^(m)(l) is the m-th Gaussian feature of the i-th pixel computed for label l, Q_j(l) is the probability that the j-th surrounding pixel has label l, and f_i is the feature vector of the i-th pixel, which can be a position feature, a color feature, or a feature encoded by the deep network; k^(m)(f_i, f_j) = exp(−(1/2)(f_i − f_j)^T Λ_m (f_i − f_j)) is the m-th Gaussian kernel, used to measure the similarity between the feature vectors of different pixels, and Λ_m is the parameter of the Gaussian kernel.
S73, integration stage: the features computed in the message-passing stage are integrated:
Q̂_i(x_i) = Σ_{l∈L} u(x_i, l) Σ_m w_m Q̃_i^(m)(l)
where u(x_i, x_j) = [x_i ≠ x_j] expresses the compatibility between labels, and w_m is the weight of the m-th Gaussian feature.
S74, update stage: the probability that each pixel belongs to the GTV is updated by the following formula and normalized so that the probabilities lie in [0, 1]:
Q_i(x_i) = (1/z_i) exp(U_i(x_i) − Q̂_i(x_i))
S75, if Q_i(x_i) has converged, the calculation ends; otherwise, return to step S72 and continue the iteration.
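Steps S71-S75 can be sketched on a tiny 1-D example with binary labels and a single Gaussian kernel. This is a minimal sketch under assumptions: one kernel (m = 1), position-only features, and illustrative constants; the exact potentials of a full implementation may differ.

```python
import numpy as np

def mean_field(unary_prob, feats, lam=1.0, w=1.0, n_iter=5):
    """Binary mean-field iteration with one Gaussian kernel
    k(f_i, f_j) = exp(-||f_i - f_j||^2 / (2 * lam^2)).
    unary_prob[i] is the network's probability that pixel i is GTV (label 1);
    the update follows Q_i(x_i) proportional to exp(U_i(x_i) - Qhat_i(x_i))."""
    unary_prob = np.asarray(unary_prob, dtype=float)
    # pairwise Gaussian kernel over feature vectors, zeroed on the diagonal (j != i)
    diff = feats[:, None, :] - feats[None, :, :]
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * lam ** 2))
    np.fill_diagonal(k, 0.0)
    unary = np.stack([1.0 - unary_prob, unary_prob], axis=1)  # Q over labels {0, 1}
    q = unary.copy()                                          # S71: initialize from network
    for _ in range(n_iter):
        q_tilde = k @ q                                       # S72: message passing
        penalty = w * q_tilde[:, ::-1]                        # S73: u(l, l') = [l != l']
        logits = np.log(unary + 1e-8) - penalty               # S74: update ...
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)                     # ... and normalize to [0, 1]
    return q[:, 1]                                            # probability of GTV
```

On a line of pixels where the network is confident everywhere except one, the Gaussian-weighted neighbours pull the uncertain pixel toward the surrounding label, which is the intended smoothing effect of the post-processing.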
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the present invention, and are not a limitation of the embodiments of the present invention. For those of ordinary skill in the art, other variations or changes in different forms may also be made on the basis of the above description. There is no need, and no way, to exhaust all the embodiments. Any modifications, equivalent replacements, improvements, etc. made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.