CN109829885A - Method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network - Google Patents


Info

Publication number
CN109829885A
CN109829885A (application CN201811583585.9A)
Authority
CN
China
Prior art keywords
image
pixel
primary tumor
deep semantic
semantic segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811583585.9A
Other languages
Chinese (zh)
Other versions
CN109829885B (en)
Inventor
孙颖
陆遥
林丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perception Vision Medical Technology Co ltd
Original Assignee
Cancer Prevention Center Of Zhongshan University (affiliated Cancer Hospital Of Zhongshan University Zhongshan University Institute Of Oncology)
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cancer Prevention Center Of Zhongshan University (affiliated Cancer Hospital Of Zhongshan University Zhongshan University Institute Of Oncology), National Sun Yat Sen University filed Critical Cancer Prevention Center Of Zhongshan University (affiliated Cancer Hospital Of Zhongshan University Zhongshan University Institute Of Oncology)
Priority to CN201811583585.9A priority Critical patent/CN109829885B/en
Publication of CN109829885A publication Critical patent/CN109829885A/en
Application granted granted Critical
Publication of CN109829885B publication Critical patent/CN109829885B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically identifying nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network. The method comprises: acquiring magnetic resonance three-dimensional images of a patient; performing gray-scale bias field correction on the three-dimensional images; processing the corrected images with an improved histogram matching algorithm; cropping an ROI region from the preprocessed images and dividing it into 2*2 overlapping patches as model input; inputting the patches into a trained deep semantic segmentation network to identify the nasopharyngeal carcinoma primary tumor; and finally merging the recognition results of the output patches to obtain the final primary tumor recognition result. The invention can effectively improve the quality of the input data and, by learning both the global information and the detailed information of high-resolution images and combining a post-processing method, can effectively improve prediction accuracy and model generalization, thereby effectively improving the working efficiency of medical staff.

Description

Method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network
Technical field
The present invention relates to the fields of image processing, deep learning and medicine, and in particular to a method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network.
Background technique
In the medical domain, intensity-modulated radiation therapy can greatly improve the survival rate and quality of life of cancer patients. However, this advanced treatment requires the contour of the target tumor to be judged accurately, and formulating a radiotherapy plan costs the radiation oncologist several hours. Meanwhile, cancer incidence is expected to continue rising over the next decade, further aggravating the global healthcare burden. In particular, statistics show that up to 60,000 new nasopharyngeal carcinoma cases are recorded every year in China and Southeast Asia. Because the nasopharynx adjoins the skull base, 60%~70% of patients already show destruction of skull-base bone when seeking treatment, and 18% even present intracranial and/or cavernous sinus involvement, making surgery difficult. At the same time, 85% of patients present with cervical or pharyngeal lymph node metastasis and are unsuitable for surgery. The World Health Organization divides nasopharyngeal carcinoma into three types: keratinizing squamous cell carcinoma, non-keratinizing carcinoma, and undifferentiated carcinoma. In China and Southeast Asia, 95% of nasopharyngeal carcinomas are undifferentiated, and the vast majority of the remaining 5% are non-keratinizing; both have moderate sensitivity to radiation, so radiotherapy is the primary treatment for nasopharyngeal carcinoma. To ensure that more patients can receive timely and effective radiotherapy under limited medical resources, it is of great importance to simplify the radiation oncologist's workflow and improve the efficiency of radiotherapy planning.
In recent years, there has been keen interest in exploring artificial-intelligence-assisted diagnosis of disease, and in some fields AI algorithms have produced models that outperform human experts. In preliminary studies on automatically delineating nasopharyngeal carcinoma primary tumors, artificial intelligence (AI) has proven a powerful approach, showing considerable advantage in normal-tissue segmentation tasks. We therefore believe that building an AI-based tumor contouring tool with deep learning and embedding AI-assisted contouring in the radiotherapy planning workflow can effectively improve the working efficiency of medical staff. Against the background of ever-increasing demand for radiotherapy, this is particularly attractive for low- and middle-income countries that lack radiotherapy resources. The method can also be extended to all other cancer types, bringing substantive progress to future radiotherapy workflows.
Summary of the invention
It is an object of the present invention to overcome the deficiencies of the prior art and provide a method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network.
In order to solve the above technical problems, the technical solution adopted by the present invention is:
A method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network, comprising the following steps:
S1: acquire magnetic resonance three-dimensional images of the patient, and perform preliminary data preprocessing on the magnetic resonance three-dimensional images;
S2: perform gray-scale bias field correction on the preprocessed magnetic resonance three-dimensional images, so that gray values within the same tissue of the same image become more uniform;
S3: process the magnetic resonance three-dimensional images with an improved histogram matching algorithm, training a gray-scale mapping function to match the histogram of each image with the histogram of a pre-selected template image, so that gray values within the same tissue become more consistent across images;
S4: crop the ROI region of the magnetic resonance three-dimensional images and divide it into 2*2 overlapping patches as model input;
S5: construct and train a deep semantic segmentation network to identify the primary tumor;
S6: input the patches into the trained deep semantic segmentation network to identify the primary tumor;
S7: merge the recognition results of the output patches to obtain the final primary tumor recognition result;
S8: post-process the primary tumor recognition result with a mean-field iteration algorithm.
Preferably, in step S1, the magnetic resonance three-dimensional images comprise four different data sequences T1, T1C, T1FSC and T2; the data of each sequence form one three-dimensional image, and the four three-dimensional images are preprocessed by down-sampling and normalization.
Preferably, step S2 is specifically:
The gray-scale bias field correction is modeled as the following problem:
v(x) = u(x) + f(x)
where v is the given image, u is the corrected image, f is the bias field, and x is the pixel coordinate. The problem is solved by the following iterative process:
u^n = v − S{v − u_E^n}
where u^n is the corrected image output after the n-th iteration; f^n = S{v − u_E^n} is the bias field estimate at the n-th iteration; S{·} is a smoothing operator, fitted with B-spline curves; u_E^n is the expected value of the current corrected image, computed from the corrected image output by the previous iteration. After N iterations, if u^N has converged, the computation terminates and u^N is taken as the corrected image; otherwise the iteration continues.
Preferably, step S3 specifically comprises the following steps:
S31: select the image of one patient from a large number of patients as the template for subsequent histogram matching, the selection principle being that the image's gray-value distribution is representative of the average;
S32: compute the histogram of the image and the histogram of the template image, obtaining the gray-value distributions of the two images;
S33: solve for the optimal gray-value mapping function by dynamic programming to match the gray-value distributions of the two images, so that the gray values of the image are mapped onto the gray values of the template image.
Preferably, step S4 specifically comprises the following steps:
S41: convert the magnetic resonance three-dimensional image into a binary map with a threshold of 20;
S42: for the binarized magnetic resonance three-dimensional image, compute the sum of all pixels of each two-dimensional slice along the z-axis, and plot the resulting curve;
S43: divide the ROI region in the x/y plane into 2*2 overlapping patches as the input of the deep network.
Preferably, step S5 specifically comprises the following steps:
S51: construct the network structure, which comprises an encoder, a decoder and skip connections;
Encoder: extracts high-level abstract features from the input image through convolutional layers and down-sampling; the image is encoded into a feature map whose size is only 1/16 of the original image;
Decoder: decodes the feature map output by the encoder through convolutional layers and up-sampling, outputting a three-dimensional image of the same full size, in which each pixel value indicates the probability that the pixel belongs to the primary tumor region;
Skip connections: the high-resolution features of the shallower encoder layers are directly connected to the lower-resolution features of the deeper decoder layers, solving the problem that high-resolution information is lost in high-level features;
S52: train the deep semantic segmentation network with the magnetic resonance three-dimensional images of a large number of patients processed by the preceding steps; the input of the deep semantic segmentation network is a magnetic resonance three-dimensional image, the output is a magnetic resonance three-dimensional image of the same size as the input, and each pixel value lies in [0, 1], indicating the probability that the pixel belongs to the primary tumor region.
Preferably, step S8 specifically comprises the following steps:
S81: an initialization stage, in which the probability that each pixel belongs to the primary tumor is initialized to the output of the deep semantic segmentation network:
Q_i(x_i) = (1/z_i) · exp(U_i(x_i)),  x_i ∈ L, L = {0: not belonging to GTV, 1: belonging to GTV}
where i is the pixel coordinate position, x_i is the label of the pixel, Q_i(x_i) is the probability that the label of the pixel is x_i, z_i is a normalization factor, and U_i is the output of the deep semantic segmentation network;
S82: a message passing stage, in which Gaussian features, one per Gaussian kernel m, are computed for each pixel from its surrounding pixels:
Q̃_i^m(l) = Σ_{j≠i} k^m(f_i, f_j) Q_j(l)
where Q̃_i^m(l) is the m-th Gaussian feature of the i-th pixel computed for label l, Q_j(l) is the probability that the j-th surrounding pixel has label l, and f_i is the feature vector of the i-th pixel; k^m(f_i, f_j) is the m-th Gaussian kernel, which measures the similarity between the feature vectors of different pixels, and Λ_m is the parameter of the Gaussian kernel;
S83: a message integration stage, in which the features computed in the message passing stage are integrated:
Q̂_i(x_i) = Σ_{x_j ∈ L} u(x_i, x_j) Σ_m w_m Q̃_i^m(x_j)
where u(x_i, x_j) = [x_i ≠ x_j] denotes the compatibility between labels, and w_m is the weight of the m-th Gaussian feature;
S84: an update stage, in which the probability that each pixel belongs to the primary tumor is updated by the following formula and normalized so that probabilities lie in [0, 1]:
Q_i(x_i) = (1/z_i) · exp(U_i(x_i) − Q̂_i(x_i));
S85: if Q_i(x_i) has converged, the computation terminates; otherwise the iteration continues.
In this technical scheme, one complete iteration comprises the above initialization stage, message passing stage, message integration stage and update stage. After N iterations, if Q_i(x_i) has converged, the computation terminates; otherwise the iteration continues.
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention uses multiple magnetic resonance three-dimensional sequences of a patient and applies data preprocessing steps such as gray-scale bias field correction and histogram matching, which can effectively improve the quality of the input data and thereby the prediction performance of the deep model.
(2) The present invention uses a deep learning method with an encoder-decoder network structure and skip connections, which can learn both the global information and the detailed information of high-resolution images; combined with mean-field iteration post-processing, it can effectively improve the prediction accuracy and the generalization ability of the model.
(3) The present invention embeds an AI-assisted contouring method in the radiotherapy planning workflow, which can effectively improve the working efficiency of medical staff; against the background of ever-increasing demand for radiotherapy, this is particularly attractive for low- and middle-income countries that lack radiotherapy resources. The method can also be extended to all other cancer types, bringing substantive progress to future radiotherapy workflows.
Description of the drawings
Fig. 1 is a flow chart of the steps of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention;
Fig. 2 is a diagram of the deep network structure of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention;
Fig. 3 is a schematic diagram of the patch splitting of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention.
Specific embodiment
The present invention is further illustrated below with reference to the embodiments. The drawings are for illustration only and show schematic rather than physical representations; they should not be understood as limiting this patent. To better illustrate the embodiments, some components in the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product; for those skilled in the art, the omission of some known structures and their descriptions in the drawings is understandable.
The same or similar reference signs in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, it should be understood that terms such as "upper", "lower", "left" and "right", if used to indicate an orientation or positional relationship, are based on the orientation or positional relationship shown in the drawings, are merely for convenience and simplification of the description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; such positional terms are therefore illustrative only and should not be understood as limiting this patent. Those of ordinary skill in the art can understand the specific meaning of the above terms according to the situation.
Embodiment
Referring to Fig. 1, a method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network comprises the following steps:
A: acquire magnetic resonance three-dimensional images of the patient, and perform preliminary data preprocessing on the magnetic resonance three-dimensional images;
B: perform gray-scale bias field correction on the preprocessed images, so that gray values within the same tissue of the same image become more uniform;
C: process the images with an improved histogram matching algorithm, training a gray-scale mapping function to match the histogram of each image with the histogram of a pre-selected template image, so that gray values within the same tissue become more consistent across images;
D: crop the ROI region of the images and divide it into 2*2 overlapping patches as model input;
E: construct and train a deep semantic segmentation network to identify the primary tumor;
F: input the patches into the trained deep semantic segmentation network to identify the primary tumor;
G: merge the recognition results of the output patches to obtain the final primary tumor recognition result;
H: post-process the primary tumor recognition result with a mean-field iteration algorithm.
The workflow of one specific embodiment of the method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network of the present invention comprises:
S1, data preparation and preliminary preprocessing.
Step S1 specifically comprises the following steps:
S11, collect the magnetic resonance three-dimensional images of the patient, comprising four different data sequences T1, T1C, T1FSC and T2;
S12, apply preliminary preprocessing steps such as down-sampling and gray-value normalization;
S13, merge the four sequence volumes processed by the above steps into a single three-dimensional image with four channels (analogous to the three RGB channels of a two-dimensional natural image).
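Steps S12–S13 can be sketched as follows (a minimal illustration assuming the four sequences are already co-registered to a common shape; the array names and the z-score form of the normalization are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def normalize(vol):
    """Z-score gray-value normalization of one MR volume (step S12)."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)

def merge_sequences(t1, t1c, t1fsc, t2):
    """Stack four co-registered MR volumes of shape (D, H, W) into a single
    four-channel volume (D, H, W, 4), analogous to RGB channels (step S13)."""
    vols = [normalize(v) for v in (t1, t1c, t1fsc, t2)]
    return np.stack(vols, axis=-1)

# toy volumes standing in for real MR data
d, h, w = 8, 16, 16
rng = np.random.default_rng(0)
seqs = [rng.normal(100, 10, size=(d, h, w)) for _ in range(4)]
merged = merge_sequences(*seqs)
print(merged.shape)  # (8, 16, 16, 4)
```

Each channel is normalized independently, so a sequence with a different intensity scale (e.g. T1C after contrast injection) does not dominate the others.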
S2, perform gray-scale bias field correction on the preliminarily preprocessed three-dimensional image.
Step S2 is specifically:
S21, model the gray-scale bias field correction as the following problem:
v(x) = u(x) + f(x)
where v is the given image, u is the corrected image, f is the bias field, and x is the pixel coordinate.
S22, initialize u^0 = v, the iteration counter n = 1, and the control point set P;
S23, compute u_E^n, the expected value of the current corrected image given the corrected image output by the previous iteration;
S24, update u^n = v − S{v − u_E^n};
S25, if u^n has converged, the computation terminates; otherwise set n = n + 1 and return to step S23.
Here u^n is the corrected image output after the n-th iteration; f^n = S{v − u_E^n} is the bias field estimate at the n-th iteration; S{·} is a smoothing operator, fitted with B-spline curves. S is computed by the following formula:
S{g}(x, y, z) = Σ_{l=0}^{3} Σ_{m=0}^{3} Σ_{n=0}^{3} B_l(r) B_m(s) B_n(t) P_{i+l, j+m, k+n}
where i = ⌊x · n_x⌋, j = ⌊y · n_y⌋, k = ⌊z · n_z⌋, and r, s, t are the corresponding fractional parts; the symbol ⌊·⌋ denotes rounding down; n_x, n_y and n_z are the numbers of control points in the x, y and z directions respectively; P is the control point set; B are the cubic B-spline basis functions.
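The correction idea can be sketched with a single smoothing pass, using a Gaussian filter as a simple stand-in for the B-spline-fitted smoothing operator S{·} (a toy illustration under that assumption, not the patent's exact estimator):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# synthetic "observed" image v = u + f: flat tissue u plus a smooth bias ramp f
h, w = 64, 64
u_true = np.full((h, w), 100.0)
f_true = np.tile(np.linspace(-20.0, 20.0, w), (h, 1))  # zero-mean smooth bias
v = u_true + f_true

# estimate the bias as the heavily smoothed deviation from the image mean,
# then subtract it: u_hat = v - S{v - mean(v)}
f_hat = gaussian_filter(v - v.mean(), sigma=8)
u_hat = v - f_hat

err_before = np.abs(v - u_true).mean()
err_after = np.abs(u_hat - u_true).mean()
print(err_after < 0.5 * err_before)  # True: bias largely removed
```

The Gaussian filter plays the role of S{·}: it keeps only the slowly varying component of the residual, which is attributed to the bias field and subtracted out; the real method iterates this with an expectation step and a B-spline fit.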
S3, apply the improved histogram matching to the three-dimensional image after gray-scale bias field correction.
Step S3 specifically comprises the following steps:
S31, select the image of one patient from a large number of patients as the template for subsequent histogram matching, the selection principle being that the image's gray-value distribution is representative of the average;
S32, compute the histogram of the image and the histogram of the template image, obtaining the gray-value distributions of the two images;
S33, solve for the optimal gray-value mapping function by dynamic programming to match the gray-value distributions of the two images, so that the gray values of the image are mapped onto the gray values of the template image. The optimal gray-value mapping function minimizes the following overall penalty function:
D(m, n) = min_{k, l} { D(m − k, n − l) + d_{k,l}(m, n) }
D(0, 0) = 0
D(i, j) = ∞ (i ≤ 0 or j ≤ 0, (i, j) ≠ (0, 0))
where D(m, n) is the overall penalty of matching the first m gray values of the histogram of image A to the first n gray values of the histogram of image B, and d_{k,l}(m, n) is the local penalty of a k-to-l mapping, i.e. the local penalty of matching the (m − k + 1)-th to m-th gray values of the histogram of image A to the (n − l + 1)-th to n-th gray values of the histogram of image B, given by:
d_{k,l}(m, n) = | Σ_{p = m − k + 1}^{m} h_A(p) − Σ_{q = n − l + 1}^{n} h_B(q) |
where h_A(m) and h_B(n) denote the frequencies with which the m-th and n-th gray values occur in image A and image B (the template image) respectively, and H_A and H_B denote the corresponding cumulative frequencies.
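A runnable stand-in for step S3 is classic CDF-based quantile matching (an approximation of, not the same algorithm as, the dynamic-programming mapping described above):

```python
import numpy as np

def match_histogram(image, template):
    """Map the gray values of `image` so that their distribution follows
    `template`, via the classic CDF/quantile matching construction."""
    src_vals, src_idx, src_counts = np.unique(
        image.ravel(), return_inverse=True, return_counts=True)
    tpl_vals, tpl_counts = np.unique(template.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / image.size
    tpl_cdf = np.cumsum(tpl_counts) / template.size
    # for each source quantile, look up the template gray value at that quantile
    mapped = np.interp(src_cdf, tpl_cdf, tpl_vals)
    return mapped[src_idx].reshape(image.shape)

rng = np.random.default_rng(1)
img = rng.normal(80, 5, size=(32, 32))    # "patient" image
tpl = rng.normal(120, 15, size=(32, 32))  # "template" image
out = match_histogram(img, tpl)
print(out.shape)  # (32, 32); out's gray-value distribution now follows tpl's
```

After matching, the same tissue should fall in similar gray-value ranges in both images, which is the goal the patent states for this step.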
S4, crop the ROI (region of interest) from the histogram-matched three-dimensional image and divide it into multiple patches.
Step S4 specifically comprises the following steps:
S41, convert the three-dimensional image into a binary map with a threshold of 20;
S42, for the binarized three-dimensional image, compute the sum of all pixels of each two-dimensional slice (single slice) along the z-axis, and plot the resulting curve;
S43, take the first minimum point of the curve as the dividing line of the neck, and take the image above the neck as the ROI region;
S44, referring to Fig. 3, divide the ROI region in the x/y plane into 2*2 overlapping patches (three-dimensional images) as the input of the deep network. As illustrated in the figure, the original size of a single slice (x/y plane) is 512*512, with the GTV contour curve in the middle; the plane is covered by four patches of size 320*320, with overlap between patches, especially in the GTV region.
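The 2*2 overlapping split of step S44 can be sketched as follows, with the patch size 320 and slice size 512 taken from the description; the offsets 0 and 512 − 320 = 192 are the natural choice but are an assumption, since the patent does not state them explicitly:

```python
import numpy as np

SLICE, PATCH = 512, 320
OFFSETS = [0, SLICE - PATCH]  # 0 and 192

def split_patches(volume):
    """Split a (D, 512, 512) volume into four overlapping (D, 320, 320)
    patches arranged 2*2 in the x/y plane; returns patches and their origins."""
    patches, origins = [], []
    for y0 in OFFSETS:
        for x0 in OFFSETS:
            patches.append(volume[:, y0:y0 + PATCH, x0:x0 + PATCH])
            origins.append((y0, x0))
    return patches, origins

vol = np.zeros((4, SLICE, SLICE))
patches, origins = split_patches(vol)
print(len(patches), patches[0].shape)  # 4 (4, 320, 320)

# every pixel of the slice is covered by at least one patch,
# and the central band is covered by several
cover = np.zeros((SLICE, SLICE), dtype=int)
for (y0, x0) in origins:
    cover[y0:y0 + PATCH, x0:x0 + PATCH] += 1
print(cover.min(), cover.max())  # 1 4
```

With these offsets the central 128-pixel band, where the GTV lies, is seen by all four patches, matching the description that the overlap concentrates in the GTV region.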
S5, construct a deep semantic segmentation network to identify the GTV, and train it.
Step S5 specifically comprises the following steps:
S51, construct the network structure, comprising:
Encoder: extracts high-level abstract features from the input image through convolutional layers and down-sampling; the image is encoded into a feature map whose size is only 1/16 of the original image.
Decoder: decodes the feature map output by the encoder through convolutional layers and up-sampling, outputting a three-dimensional image of the same full size, in which each pixel value indicates the probability that the pixel belongs to the GTV region.
Skip connections: the high-resolution features of the shallower encoder layers are directly connected to the lower-resolution features of the deeper decoder layers, solving the problem that high-resolution information is lost in high-level features.
S52, train the deep semantic segmentation network with the images of a large number of patients processed by the above preprocessing steps. The input of the network is a three-dimensional image, the output is a three-dimensional image of the same size as the input, and each pixel value lies in [0, 1], indicating the probability that the pixel belongs to the GTV region.
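The shape flow of such an encoder-decoder with skip connections can be illustrated without any learned weights, using average pooling for down-sampling and nearest-neighbor repetition for up-sampling (a structural sketch only; the real network uses trained convolutions, and the channel counts here are arbitrary):

```python
import numpy as np

def down(x):
    """2x average-pool a (C, H, W) feature map (stand-in for strided conv)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up(x):
    """2x nearest-neighbor up-sample a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

x = np.random.rand(4, 64, 64)      # input "image" with 4 channels
skips = []
enc = x
for _ in range(4):                 # four down-samplings -> 1/16 spatial size
    skips.append(enc)
    enc = down(enc)
print(enc.shape)                   # (4, 4, 4): 1/16 of 64

dec = enc
for skip in reversed(skips):       # decoder mirrors the encoder
    dec = up(dec)
    # skip connection: concatenate encoder features along the channel axis,
    # reinjecting high-resolution detail lost during encoding
    dec = np.concatenate([dec, skip], axis=0)
print(dec.shape[1:])               # (64, 64): full resolution restored
```

The point of the sketch is the bookkeeping: four halvings give the 1/16 bottleneck stated in S51, and each decoder stage regains one factor of 2 while the concatenation brings back the matching-resolution encoder features.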
S6, input the patches obtained after the above preprocessing steps into the trained deep semantic segmentation network to identify the nasopharyngeal carcinoma primary tumor.
Step S6 specifically comprises the following steps:
S61, input the patient's patches into the trained deep semantic segmentation network to identify the nasopharyngeal carcinoma primary tumor.
S62, merge the recognition results of the output patches by averaging (if a pixel belongs to several patches at once, the predicted value of that pixel is the mean of those patches' predicted values at that pixel), obtaining the final GTV recognition result.
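Step S62's averaging merge can be sketched as follows (patch size and slice size from the description; the patch origins mirror the splitting sketch and the function names are illustrative):

```python
import numpy as np

SLICE, PATCH = 512, 320
OFFSETS = [0, SLICE - PATCH]

def merge_predictions(pred_patches, origins):
    """Average per-pixel predictions over all patches covering the pixel;
    each prediction is a (D, 320, 320) probability map."""
    d = pred_patches[0].shape[0]
    acc = np.zeros((d, SLICE, SLICE))
    cnt = np.zeros((1, SLICE, SLICE))
    for p, (y0, x0) in zip(pred_patches, origins):
        acc[:, y0:y0 + PATCH, x0:x0 + PATCH] += p
        cnt[:, y0:y0 + PATCH, x0:x0 + PATCH] += 1
    return acc / cnt  # every pixel is covered by at least one patch

origins = [(y0, x0) for y0 in OFFSETS for x0 in OFFSETS]
# four constant "probability maps" so the averaging is easy to check
preds = [np.full((2, PATCH, PATCH), v) for v in (0.2, 0.4, 0.6, 0.8)]
merged = merge_predictions(preds, origins)
print(merged.shape)            # (2, 512, 512)
print(merged[0, 256, 256])     # 0.5: centre pixel is the mean of all four
```

Averaging in the overlap region smooths out disagreements between patches exactly where the GTV lies, which is why the split is designed to overlap there.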
S7, post-process the recognition result output by the deep network with a mean-field iteration algorithm.
Step S7 specifically comprises the following steps:
S71, an initialization stage: the probability that each pixel belongs to the GTV is initialized to the output of the deep semantic segmentation network:
Q_i(x_i) = (1/z_i) · exp(U_i(x_i)),  x_i ∈ L, L = {0: not belonging to GTV, 1: belonging to GTV}
where i is the pixel coordinate position, x_i is the label of the pixel, Q_i(x_i) is the probability that the label of the pixel is x_i, z_i is a normalization factor, and U_i is the output of the deep semantic segmentation network.
S72, a message passing stage: Gaussian features, one per Gaussian kernel m, are computed for each pixel from its surrounding pixels:
Q̃_i^m(l) = Σ_{j≠i} k^m(f_i, f_j) Q_j(l)
where Q̃_i^m(l) is the m-th Gaussian feature of the i-th pixel computed for label l, Q_j(l) is the probability that the j-th surrounding pixel has label l, and f_i is the feature vector of the i-th pixel, which can be a position feature, a color feature, or a feature encoded by a deep network; k^m(f_i, f_j) is the m-th Gaussian kernel, which measures the similarity between the feature vectors of different pixels, and Λ_m is the parameter of the Gaussian kernel.
S73, a message integration stage: the features computed in the message passing stage are integrated:
Q̂_i(x_i) = Σ_{x_j ∈ L} u(x_i, x_j) Σ_m w_m Q̃_i^m(x_j)
where u(x_i, x_j) = [x_i ≠ x_j] denotes the compatibility between labels, and w_m is the weight of the m-th Gaussian feature.
S74, an update stage: the probability that each pixel belongs to the GTV is updated by the following formula and normalized so that probabilities lie in [0, 1]:
Q_i(x_i) = (1/z_i) · exp(U_i(x_i) − Q̂_i(x_i)).
S75, if Q_i(x_i) has converged, the computation terminates; otherwise return to step S72 and continue the iteration.
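The four stages above can be sketched for a single spatial Gaussian kernel, approximating the message passing with a Gaussian filter over the Q maps (a simplified toy under stated assumptions: one kernel only, Potts compatibility, the pixel's own contribution is not subtracted, and the weight w = 2.0 is illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_field(prob_fg, iters=5, sigma=1.5, w=2.0):
    """Mean-field refinement of a foreground probability map using one
    spatial Gaussian kernel and Potts label compatibility."""
    eps = 1e-8
    unary = np.stack([np.log(1 - prob_fg + eps), np.log(prob_fg + eps)])
    q = np.stack([1 - prob_fg, prob_fg])           # initialization stage
    for _ in range(iters):
        # message passing: smooth each label's Q map with the Gaussian kernel
        msg = np.stack([gaussian_filter(q[l], sigma) for l in (0, 1)])
        # Potts compatibility [x_i != x_j]: label l is penalized by the
        # smoothed evidence for the opposite label (message integration)
        pairwise = w * msg[::-1]
        logits = unary - pairwise                  # update stage
        q = np.exp(logits - logits.max(axis=0))
        q /= q.sum(axis=0)                         # normalization
    return q[1]

p = np.full((15, 15), 0.1)      # background everywhere...
p[5:10, 5:10] = 0.9             # ...except a coherent foreground blob
p[1, 1] = 0.6                   # one isolated, weakly foreground pixel
out = mean_field(p)
print(out[7, 7] > 0.5, out[1, 1] < 0.5)  # True True: blob kept, noise removed
```

This illustrates the intended post-processing effect: spatially isolated, weakly supported detections are suppressed while coherent tumor regions are preserved.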
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the present invention and are not a limitation of its embodiments. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (7)

1. A method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network, characterized by comprising the following steps:
S1: acquiring magnetic resonance three-dimensional images of the patient, and performing preliminary data preprocessing on the magnetic resonance three-dimensional images;
S2: performing gray-scale bias field correction on the preprocessed magnetic resonance three-dimensional images, so that gray values within the same tissue of the same image become more uniform;
S3: processing the magnetic resonance three-dimensional images with an improved histogram matching algorithm, training a gray-scale mapping function to match the histogram of each image with the histogram of a pre-selected template image, so that gray values within the same tissue become more consistent across images;
S4: cropping the ROI region of the magnetic resonance three-dimensional images, and dividing it into 2*2 overlapping patches as model input;
S5: constructing and training a deep semantic segmentation network to identify the primary tumor;
S6: inputting the patches into the trained deep semantic segmentation network to identify the primary tumor;
S7: merging the recognition results of the output patches to obtain the final primary tumor recognition result;
S8: post-processing the primary tumor recognition result with a mean-field iteration algorithm.
2. The method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network according to claim 1, characterized in that: in step S1, the magnetic resonance three-dimensional images comprise four different data sequences T1, T1C, T1FSC and T2; the data of each sequence form one three-dimensional image, and the four three-dimensional images are preprocessed by down-sampling and normalization.
3. The method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network according to claim 1, characterized in that step S2 is specifically:
modeling the gray-scale bias field correction as the following problem:
v(x) = u(x) + f(x)
where v is the given image, u is the corrected image, f is the bias field, and x is the pixel coordinate; the problem is solved by the following iterative process:
u^n = v − S{v − u_E^n}
where u^n is the corrected image output after the n-th iteration; f^n = S{v − u_E^n} is the bias field estimate at the n-th iteration;
S{·} is a smoothing operator, fitted with B-spline curves; u_E^n is the expected value of the current corrected image, computed from the corrected image output by the previous iteration; after N iterations, if u^N has converged, the computation terminates and u^N is taken as the corrected image; otherwise the iteration continues.
4. The method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network according to claim 1, characterized in that step S3 specifically comprises the following steps:
S31: selecting the image of one patient from a large number of patients as the template for subsequent histogram matching, the selection principle being that the image's gray-value distribution is representative of the average;
S32: computing the histogram of the image and the histogram of the template image, obtaining the gray-value distributions of the two images;
S33: solving for the optimal gray-value mapping function by dynamic programming to match the gray-value distributions of the two images, so that the gray values of the image are mapped onto the gray values of the template image.
5. The method for automatic identification of nasopharyngeal carcinoma primary tumors based on a deep semantic segmentation network according to claim 1, characterized in that step S4 specifically comprises the following steps:
S41: converting the magnetic resonance three-dimensional image into a binary map with a threshold of 20;
S42: for the binarized magnetic resonance three-dimensional image, computing the sum of all pixels of each two-dimensional slice along the z-axis, and plotting the resulting curve;
S43: dividing the ROI region in the x/y plane into 2*2 overlapping patches as the input of the deep network.
6. The method for automatic identification of a nasopharyngeal carcinoma primary tumor based on a deep semantic segmentation network according to claim 1, characterized in that the step S5 specifically comprises the following steps:
S51: constructing the network structure, the network structure comprising an encoder, a decoder and skip connections;
Encoder: extracts high-level abstract features from the input image through convolutional layers and down-sampling, encoding the image into a feature map whose size is only 1/16 of the original image;
Decoder: decodes the feature map output by the encoder through convolutional layers and up-sampling, outputting a three-dimensional image of the same size as the full-size input, in which each pixel value indicates the probability that the pixel belongs to the primary tumor region;
Skip connections: directly connect the high-resolution features of the shallower layers in the encoder to the corresponding lower-resolution features of the deeper layers in the decoder, solving the problem of high-resolution information being lost in high-level features;
S52: training the deep semantic segmentation network with the magnetic resonance three-dimensional images of a large number of patients processed by the above steps; the input of the deep semantic segmentation network is a magnetic resonance three-dimensional image, the output magnetic resonance three-dimensional image has the same size as the input, and each pixel value lies in the range [0, 1], indicating the probability that the pixel belongs to the primary tumor region.
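The encoder/decoder/skip data flow of S51 can be traced with a shape-level sketch. Convolutions and learned weights are deliberately omitted; only the pooling and up-sampling path is shown, so this illustrates the claimed 1/16 bottleneck and the skip connections, not a trainable network.

```python
import numpy as np

def max_pool2(x):
    # 2x2 max pooling: halves each spatial dimension.
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    # Nearest-neighbour 2x up-sampling.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder_trace(image):
    """Trace spatial sizes through a 4-level encoder/decoder.

    Four down-samplings shrink the map to 1/16 of the input size
    (the claimed bottleneck); four up-samplings restore it, with
    each decoder level reunited with the matching encoder feature
    via a skip connection (addition used here for simplicity).
    """
    skips, x = [], image
    for _ in range(4):          # encoder path
        skips.append(x)
        x = max_pool2(x)
    for _ in range(4):          # decoder path with skip connections
        x = upsample2(x)
        x = x + skips.pop()
    return 1.0 / (1.0 + np.exp(-x))   # sigmoid -> per-pixel probability
```

The output has the same spatial size as the input, with every value in [0, 1], mirroring the probability map described in S52.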
7. The method for automatic identification of a nasopharyngeal carcinoma primary tumor based on a deep semantic segmentation network according to claim 1, characterized in that the step S8 specifically comprises the following steps:
S81: initialization phase: the probability that each pixel belongs to the primary tumor is initialized from the output result of the deep semantic segmentation network:

Q_i(x_i) = exp(U_i(x_i)) / z_i,  x_i ∈ L, L = {0: not belonging to GTV, 1: belonging to GTV}

wherein i is the pixel coordinate position, x_i is the label of the pixel, Q_i(x_i) is the probability that the label of the pixel is x_i, z_i is the normalization factor, and U_i is the output result of the deep semantic segmentation network;
S82: information transfer phase: for each pixel, m Gaussian features are calculated from the surrounding pixels:

Q^m_i(l) = Σ_{j≠i} k_m(f_i, f_j) Q_j(l),  k_m(f_i, f_j) = exp(-(f_i - f_j)^T Λ_m (f_i - f_j) / 2)

wherein Q^m_i(l) is the m-th Gaussian feature calculated for the i-th pixel for label l, Q_j(l) is the probability that the j-th surrounding pixel has label l, f_i is the feature vector of the i-th pixel, k_m(f_i, f_j) is the m-th Gaussian kernel, used to measure the similarity between the feature vectors of different pixels, and Λ_m is the parameter of the Gaussian kernel;
S83: information integration phase: the features calculated in the information transfer phase are integrated:

Q̂_i(x_i) = Σ_{l∈L} u(x_i, l) Σ_m w_m Q^m_i(l)

wherein u(x_i, x_j) = [x_i ≠ x_j] represents the compatibility between labels, and w_m is the weight of the m-th Gaussian feature;
S84: update phase: the probability that each pixel belongs to the primary tumor is updated by the following formula and normalized so that the probabilities lie in the range [0, 1]:

Q_i(x_i) = exp(U_i(x_i) − Q̂_i(x_i)) / z_i

S85: if Q_i(x_i) has converged, the calculation terminates; otherwise, iteration continues.
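The mean-field loop of S81–S85 can be sketched for the binary GTV/non-GTV case. This is a simplified illustration: a single spatial Gaussian kernel (applied with `gaussian_filter`) stands in for the m Gaussian features, the Potts compatibility [x_i ≠ x_j] is used, and `sigma`, `w` and `tol` are assumed parameters, not values from the patent.

```python
import numpy as np
from scipy import ndimage

def meanfield_refine(prob, n_iters=5, sigma=3.0, w=2.0, tol=1e-4):
    """Simplified binary mean-field refinement of a probability map.

    `prob` is the network's per-pixel probability of belonging to
    the primary tumor (label 1); the unary potentials are its logs.
    """
    eps = 1e-8
    q1 = np.clip(prob, eps, 1 - eps)   # S81: initialize from the network output
    u1 = np.log(q1)                    # unary potential for label 1
    u0 = np.log(1 - q1)                # unary potential for label 0
    for _ in range(n_iters):
        # S82: message passing as a Gaussian-weighted sum of beliefs.
        m1 = ndimage.gaussian_filter(q1, sigma)
        m0 = ndimage.gaussian_filter(1 - q1, sigma)
        # S83: Potts compatibility penalises mass on the other label.
        s1 = u1 - w * m0
        s0 = u0 - w * m1
        # S84: update and normalize over the two labels.
        new_q1 = 1.0 / (1.0 + np.exp(s0 - s1))
        converged = np.max(np.abs(new_q1 - q1)) < tol
        q1 = new_q1
        if converged:                  # S85: stop once Q has converged
            break
    return q1
```

On a probability map with one low-confidence pixel inside a high-confidence tumor block, the refinement pulls the outlier toward its neighbours, which is the smoothing effect the post-processing step is meant to provide.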
CN201811583585.9A 2018-12-24 2018-12-24 Method for automatically identifying primary tumor of nasopharyngeal carcinoma based on deep semantic segmentation network Active CN109829885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811583585.9A CN109829885B (en) 2018-12-24 2018-12-24 Method for automatically identifying primary tumor of nasopharyngeal carcinoma based on deep semantic segmentation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811583585.9A CN109829885B (en) 2018-12-24 2018-12-24 Method for automatically identifying primary tumor of nasopharyngeal carcinoma based on deep semantic segmentation network

Publications (2)

Publication Number Publication Date
CN109829885A true CN109829885A (en) 2019-05-31
CN109829885B CN109829885B (en) 2022-07-22

Family

ID=66860687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811583585.9A Active CN109829885B (en) 2018-12-24 2018-12-24 Method for automatically identifying primary tumor of nasopharyngeal carcinoma based on deep semantic segmentation network

Country Status (1)

Country Link
CN (1) CN109829885B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991339A (en) * 2019-12-02 2020-04-10 太原科技大学 Three-dimensional puckery palate identification method adopting circular spectrum
CN111091560A (en) * 2019-12-19 2020-05-01 广州柏视医疗科技有限公司 Nasopharyngeal carcinoma primary tumor image identification method and system
CN117173092A (en) * 2023-06-28 2023-12-05 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Nasopharyngeal carcinoma radiotherapy method and system based on image processing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989371A (en) * 2015-03-03 2016-10-05 香港中文大学深圳研究院 Grayscale normalization method and apparatus for nuclear magnetic resonance image
CN107180430A (en) * 2017-05-16 2017-09-19 华中科技大学 A kind of deep learning network establishing method and system suitable for semantic segmentation
US20170334961A1 (en) * 2009-07-17 2017-11-23 Rigshospitalet Masp isoforms as inhibitors of complement activation
CN107464250A (en) * 2017-07-03 2017-12-12 深圳市第二人民医院 Tumor of breast automatic division method based on three-dimensional MRI image
CN109063710A (en) * 2018-08-09 2018-12-21 成都信息工程大学 Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170334961A1 (en) * 2009-07-17 2017-11-23 Rigshospitalet Masp isoforms as inhibitors of complement activation
CN105989371A (en) * 2015-03-03 2016-10-05 香港中文大学深圳研究院 Grayscale normalization method and apparatus for nuclear magnetic resonance image
CN107180430A (en) * 2017-05-16 2017-09-19 华中科技大学 A kind of deep learning network establishing method and system suitable for semantic segmentation
CN107464250A (en) * 2017-07-03 2017-12-12 深圳市第二人民医院 Tumor of breast automatic division method based on three-dimensional MRI image
CN109063710A (en) * 2018-08-09 2018-12-21 成都信息工程大学 Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIAYUN LI ET AL.: ""A Multi-scale U-Net for Semantic Segmentation of Histologica Images from Radical Prostatectomies"", 《AMIA ANNU SYMP PROC》 *
NICOLAS AUDEBERT ET AL.: ""Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks"", 《ASIAN CONFERENCE ON COMPUTER VISION》 *
PHILIPP KRAHENBUHL ET AL.: ""Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials"", 《ARXIV》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991339A (en) * 2019-12-02 2020-04-10 太原科技大学 Three-dimensional puckery palate identification method adopting circular spectrum
CN111091560A (en) * 2019-12-19 2020-05-01 广州柏视医疗科技有限公司 Nasopharyngeal carcinoma primary tumor image identification method and system
CN117173092A (en) * 2023-06-28 2023-12-05 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Nasopharyngeal carcinoma radiotherapy method and system based on image processing
CN117173092B (en) * 2023-06-28 2024-04-09 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Nasopharyngeal carcinoma radiotherapy method and system based on image processing

Also Published As

Publication number Publication date
CN109829885B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
US11386557B2 (en) Systems and methods for segmentation of intra-patient medical images
US10546014B2 (en) Systems and methods for segmenting medical images based on anatomical landmark-based features
Cao et al. Deformable image registration using a cue-aware deep regression network
EP3488381B1 (en) Method and system for artificial intelligence based medical image segmentation
Kearney et al. An unsupervised convolutional neural network-based algorithm for deformable image registration
CN106683104B (en) Prostate Magnetic Resonance Image Segmentation method based on integrated depth convolutional neural networks
CN109584251A (en) A kind of tongue body image partition method based on single goal region segmentation
Commowick et al. An efficient locally affine framework for the smooth registration of anatomical structures
Chen et al. A recursive ensemble organ segmentation (REOS) framework: application in brain radiotherapy
CN111784706B (en) Automatic identification method and system for primary tumor image of nasopharyngeal carcinoma
CN109829885A (en) A kind of automatic identification nasopharyngeal carcinoma primary tumo(u)r method based on deep semantic segmentation network
EP3107031A1 (en) Method, apparatus and system for spine labeling
Rodríguez Colmeiro et al. Multimodal brain tumor segmentation using 3D convolutional networks
CN111091560A (en) Nasopharyngeal carcinoma primary tumor image identification method and system
Ansari et al. Multiple sclerosis lesion segmentation in brain MRI using inception modules embedded in a convolutional neural network
Duan et al. Unsupervised learning for deformable registration of thoracic CT and cone‐beam CT based on multiscale features matching with spatially adaptive weighting
Sreeja et al. Image fusion through deep convolutional neural network
CN102663728B (en) Dictionary learning-based medical image interactive joint segmentation
CN115063397A (en) Computer-aided image analysis method, computer device and storage medium
CN110232684B (en) Automatic three-dimensional medical image segmentation method based on spectrum analysis
CN115239740A (en) GT-UNet-based full-center segmentation algorithm
CN106934785A (en) The medical image cutting method of hepatic model in a kind of training system for Robot Virtual
Xu et al. Lightweight Multi-scale Transformer for Automatic Skin Lesions Segmentation
Kunkyab et al. A deep learning‐based framework (Co‐ReTr) for auto‐segmentation of non‐small cell‐lung cancer in computed tomography images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211124

Address after: 510530 room 306, phase I office building, 12 Yuyan Road, Huangpu District, Guangzhou City, Guangdong Province

Applicant after: PERCEPTION VISION MEDICAL TECHNOLOGY Co.,Ltd.

Address before: 510275 No. 135 West Xingang Road, Guangzhou, Guangdong, Haizhuqu District

Applicant before: SUN YAT-SEN University

Applicant before: SUN YAT SEN University CANCER CENTER (SUN YAT SEN University AFFILIATED TO CANCER CENTER SUN YAT SEN UNIVERSITY CANCER INSTITUTE)

GR01 Patent grant