CN104008541A - Automatic segmentation method for PET/CT tumor target area - Google Patents
- Publication number: CN104008541A
- Application number: CN201410189200.6A
- Authority
- CN
- China
- Legal status: Pending
Landscapes
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention discloses an automatic segmentation method for a PET/CT tumour target area. The method comprises: step 1, removing the background of the PET/CT image to be segmented to obtain a grey-level histogram; step 2, performing normalised statistics on the histogram and fitting a curve with three peaks and two troughs, and taking the larger of the grey values corresponding to the two troughs as the optimal firing threshold θ; step 3, establishing an optimized PCNN model; step 4, comparing the internal activity U_ij[n] of every pixel belonging to the firing sequence with θ — if U_ij[n] > θ, setting Y_(i,j)[n] = 1, otherwise setting Y_(i,j)[n] = 0, i.e. removing the pixels that fail the condition from the firing sequence; step 5, repeating step 4 until the pixels belonging to the firing sequence remain unchanged; step 6, judging the pixels of the image that still belong to the firing sequence to belong to the tumour region. The method has the advantages of high generality, accurate segmentation results and low time consumption.
Description
Technical field
The present invention relates to an automatic segmentation method for PET/CT tumour target areas.
Background art
PET/CT functional imaging is increasingly widely applied in radiotherapy because it reveals the metabolism, proliferation and hypoxia of tissues and cells, providing a powerful and more accurate means of understanding the functional state of tumours and normal tissue and of determining tumour boundaries and lymphatic metastasis. A PET/CT metabolic target volume can guide the delineation of a traditional CT target volume and accurately define the extent of the tumour target; a radiotherapy plan formulated from the PET/CT target volume can prevent in-field recurrence and reduce damage to normal tissue. Studies have shown that when PET/CT is applied to clinical target delineation, the staging and irradiation range change for about one third of tumour patients, and the radiotherapy dose distribution is adjusted in 25–30% of cases.
Although PET/CT has demonstrated important value in tumour target determination, its application in radiotherapy is limited because acquisition times are long, the images are strongly affected by respiration and heartbeat, and the resolution is low. The low image contrast makes the tumour target hard to distinguish from surrounding tissue; even for an experienced clinician, manually segmenting PET/CT images remains difficult and time-consuming work. The clinical demand for automatic segmentation methods for PET/CT target areas is therefore very large.
Current methods for automatic PET/CT target segmentation mainly comprise the absolute threshold method, the relative threshold method, the ratio method, the halo method and equation-based methods. They fall into two classes: one sets a threshold directly, the other computes a threshold from a formula; the planning system then delineates the target automatically according to the threshold condition. These methods suit lesion delineation only under particular conditions and lack generality.
The Pulse Coupled Neural Network (PCNN) is a neural network model developed from research into the synchronous oscillation of pulse trains in the visual cortex neurons of the cat, and is widely used in fields such as image processing and pattern recognition. Owing to its biological basis, neurons with similar characteristics fire simultaneously, block information is well preserved, recognition effects similar to human vision are produced, and low-resolution images are segmented well.
Summary of the invention
The technical problem to be solved by the present invention is to provide an automatic segmentation method for a PET/CT tumour target area that overcomes the poor generality of prior-art automatic segmentation methods for PET/CT target areas.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
An automatic segmentation method for a PET/CT tumour target area, comprising the following steps:
Step 1: remove the background of the PET/CT image to be segmented, and obtain the grey-level histogram of the image after background removal;
Step 2: perform normalised statistics on the grey-level histogram and fit it with a sixth-order polynomial to obtain a fitted curve having three peaks and two troughs; take the larger of the grey values corresponding to the two troughs as the optimal firing threshold θ dividing tumour pixels from non-tumour pixels;
Step 3: establish the optimized PCNN model as follows:

F_ij[n] = S_ij;

13 ≤ δ ≤ 24, with δ an integer;

U_ij[n] = F_ij[n] · L_ij[n];

where n is the iteration number of the optimized PCNN model and is a positive integer; i and j are respectively the row and column indices of a pixel in the PET/CT image to be segmented; S_ij is the grey value of the pixel at row i, column j; δ is a weight parameter; F_ij[n], L_ij[n] and U_ij[n] are respectively the feedback input, the link input and the internal activity of the pixel at row i, column j at the n-th iteration; Y_(i,j)[n-1] is the pulse output of the pixel at row i, column j at the (n-1)-th iteration; Y_(i,j)[n] = 1 means that the pixel at row i, column j belongs to the firing sequence, and at the initial time all pixels of the image belong to the firing sequence, i.e. Y_ij[0] = 1;
Step 4: compare the internal activity U_ij[n] of each pixel belonging to the firing sequence with the optimal firing threshold θ; if U_ij[n] > θ, set Y_(i,j)[n] = 1, i.e. the pixel remains in the firing sequence; otherwise set Y_(i,j)[n] = 0, i.e. the pixel is removed from the firing sequence, completing one iteration;
Step 5: repeat step 4 until the pixels belonging to the firing sequence remain unchanged;
Step 6: judge the pixels of the PET/CT image that belong to the firing sequence to belong to the tumour region, completing the automatic segmentation of the PET/CT tumour target area.
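The iterative firing procedure of steps 3 to 5 can be sketched in Python as below. This is an illustrative sketch, not the patented implementation: the exact link-input formula is not reproduced in this text, so the code assumes L_ij[n] is δ times the mean of the nine previous pulse outputs in the 3×3 neighbourhood (i.e. w_ijkl = 1/9); the function and variable names are hypothetical.

```python
import numpy as np

def pcnn_segment(S, theta, delta=15, max_iter=100):
    """Iterate the simplified PCNN of steps 3-5.

    S     : 2-D array of grey values (background already removed)
    theta : optimal firing threshold from the histogram troughs
    delta : weight parameter (13 <= delta <= 24 in this method)
    """
    # At the initial time every pixel belongs to the firing sequence: Y_ij[0] = 1.
    Y = np.ones_like(S, dtype=float)
    F = S.astype(float)                      # feedback input F_ij[n] = S_ij
    for _ in range(max_iter):
        # Assumed link input: delta-weighted mean of the previous pulse
        # outputs of the pixel and its eight neighbours (w_ijkl = 1/9).
        pad = np.pad(Y, 1, mode="edge")
        neigh = sum(pad[di:di + S.shape[0], dj:dj + S.shape[1]]
                    for di in range(3) for dj in range(3)) / 9.0
        L = delta * neigh
        U = F * L                            # internal activity U = F * L
        # Step 4: keep only firing pixels whose activity exceeds theta.
        Y_new = np.where((U > theta) & (Y == 1), 1.0, 0.0)
        # Step 5: stop when the firing sequence no longer changes.
        if np.array_equal(Y_new, Y):
            break
        Y = Y_new
    return Y.astype(bool)                    # True = tumour region (step 6)
```

Because pixels can only leave the firing sequence, the iteration converges either when the sequence stabilises or when it empties, which is why a `max_iter` cap (the improvement of the patent) is a natural safeguard.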
To further improve the segmentation efficiency of the present invention, as an improvement, a maximum iteration count is also set in step 5; when the number of repetitions of step 4 exceeds this maximum, the method proceeds to step 6.
As one embodiment of the present invention, in step 1 the between-class-variance (Otsu) method is used to remove the background of the PET/CT image to be segmented.
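The between-class-variance (Otsu) background removal of step 1 can be sketched as follows. This is an illustrative version assuming 8-bit grey levels (0–255) and the convention that background pixels are zeroed out; the function names are hypothetical.

```python
import numpy as np

def otsu_threshold(img):
    """Return the grey level that maximises between-class variance (Otsu)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()                    # normalised histogram
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def remove_background(img):
    """Zero out pixels below the Otsu threshold (assumed convention)."""
    t = otsu_threshold(img)
    out = img.copy()
    out[img < t] = 0
    return out
```

On a bimodal image the returned threshold falls between the two modes, separating the dark background from the tissue before the histogram of the foreground is analysed.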
As a preferred embodiment of the present invention, in step 3 the value of δ is 15.
As one embodiment of the present invention, in step 3 the value of δ is 19.
As one embodiment of the present invention, in step 3 the value of δ is 21.
Compared with the prior art, the present invention has the following beneficial effects:
The present invention obtains the optimal firing threshold θ by analysing the grey-level histogram, and judges whether each pixel of the PET/CT image belongs to the tumour region through an optimized PCNN model. No manual parameter selection is needed, which avoids the neuron output falling into a local optimum through a badly chosen parameter and also reduces the amount of computation; moreover, the method is not limited to specific application scenarios and is suitable for clinical practice. The present invention therefore has the advantages of high generality, accurate segmentation results and low time consumption.
Brief description of the drawings
The present invention is described in further detail below with reference to the drawings and specific embodiments:
Fig. 1 is a flow diagram of the PET/CT tumour target area automatic segmentation method of the present invention;
Fig. 2 is a schematic diagram of the optimized PCNN model of the present invention;
Fig. 3 is an example grey-level histogram of a PET/CT image to be segmented in the present invention;
Fig. 4 is an example fitted curve of the grey-level histogram shown in Fig. 3;
Fig. 5 is an example PET/CT image to be segmented;
Fig. 6 is the image obtained after Fig. 5 is processed by the automatic segmentation method of the present invention;
Fig. 7 is a schematic diagram of the result of the automatic segmentation method mapped onto Fig. 5.
Embodiments
Embodiment 1
As shown in Figs. 1 and 2, the PET/CT tumour target area automatic segmentation method of embodiment 1 of the present invention comprises the following steps:
Step 1: obtain the PET/CT image to be segmented, remove its background with the between-class-variance (Otsu) algorithm, and obtain the grey-level histogram of the background-removed image (see Fig. 3). This histogram contains three peaks which, in order of increasing grey level, correspond to low-density organs such as the pulmonary parenchyma or blood vessels, high-density organs such as bone, and the high-brightness tumour region;
Step 2: perform normalised statistics on the grey-level histogram and fit it with a sixth-order polynomial to obtain a fitted curve with three peaks and two troughs (see Fig. 4). The three peaks of the fitted curve correspond one-to-one with the three regions of the histogram; the two troughs are respectively the boundary grey level between the low-density organs (pulmonary parenchyma, blood vessels, etc.) and the high-density organs (bone, etc.), and that between the high-density organs and the tumour region. Take the larger of the grey values corresponding to the two troughs as the optimal firing threshold θ dividing tumour pixels from non-tumour pixels;
Step 3: take each pixel of the PET/CT image to be segmented as one neuron of a PCNN model. The feedback input of the PCNN model equals the external stimulus; the link input depends on the previous pulse outputs of the current neuron and of the neurons in its neighbourhood; and the internal activity is defined as the product of the feedback input and the link input, so that the internal activity expresses grey-level changes more clearly. The optimized PCNN model is thus established as follows:

F_ij[n] = S_ij;

with δ = 15;

U_ij[n] = F_ij[n] · L_ij[n];

where n is the iteration number of the optimized PCNN model and is a positive integer; i and j are respectively the row and column indices of a pixel in the PET/CT image to be segmented; S_ij is the grey value of the pixel at row i, column j; F_ij[n], L_ij[n] and U_ij[n] are respectively the feedback input, the link input and the internal activity of the pixel at row i, column j at the n-th iteration; Y_(i,j)[n-1] is the pulse output of that pixel at the (n-1)-th iteration; Y_(i,j)[n] = 1 means that the pixel belongs to the firing sequence, and at the initial time all pixels of the image belong to the firing sequence, i.e. Y_ij[0] = 1; Y_kl[n-1] comprises the pulse outputs, at the (n-1)-th iteration, of the pixel at row i, column j and of its eight neighbouring pixels, and w_ijkl are the weighting coefficients of these nine pulse outputs;
Step 4: compare the internal activity U_ij[n] of each pixel belonging to the firing sequence with the optimal firing threshold θ; if U_ij[n] > θ, set Y_(i,j)[n] = 1, i.e. the pixel remains in the firing sequence; otherwise set Y_(i,j)[n] = 0, i.e. the pixel is removed from the firing sequence, completing one iteration;
Step 5: repeat step 4 until the pixels belonging to the firing sequence remain unchanged, that is, until either all pixels of the image have been removed from the firing sequence or no further pixels are removed;
Step 6: judge the pixels of the PET/CT image that belong to the firing sequence to belong to the tumour region, completing the automatic segmentation of the PET/CT tumour target area.
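The threshold selection of steps 1–2 — normalising the histogram, fitting a sixth-order polynomial, and reading off the larger trough as θ — can be sketched as follows. This is an illustrative sketch: the bin count, the use of a least-squares `numpy.polyfit`, and the discrete trough detection are assumptions not specified here, and the function name is hypothetical.

```python
import numpy as np

def optimal_firing_threshold(img, bins=256):
    """Fit a sixth-order polynomial to the normalised grey-level histogram
    of the foreground and return the larger interior trough as theta."""
    hist, edges = np.histogram(img[img > 0], bins=bins)  # background removed
    h = hist / hist.sum()                        # normalised statistics
    x = 0.5 * (edges[:-1] + edges[1:])           # bin centres (grey values)
    coeffs = np.polyfit(x, h, 6)                 # sixth-order polynomial fit
    fit = np.polyval(coeffs, x)
    # Interior local minima of the fitted curve are its troughs.
    troughs = [x[i] for i in range(1, len(x) - 1)
               if fit[i] < fit[i - 1] and fit[i] < fit[i + 1]]
    if not troughs:
        raise ValueError("fitted curve has no trough")
    return max(troughs)                          # the larger grey value
```

For a trimodal histogram (low-density organs, bone, tumour) the degree-6 fit has three humps and two dips, and the dip at the higher grey value separates the bone-like region from the bright tumour region.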
In tests, processing the PET/CT image to be segmented shown in Fig. 5 with the automatic segmentation method of the present invention yields the image shown in Fig. 6, demonstrating the effectiveness of the present invention for automatic segmentation of PET/CT tumour target areas. Referring to Fig. 7, mapping the result of the automatic segmentation method back onto Fig. 5 marks the tumour region in the PET/CT image to be segmented, facilitating clinical use.
Embodiment 2
The PET/CT tumour target area automatic segmentation method of embodiment 2 of the present invention is basically identical to embodiment 1; the difference is that in step 5 of embodiment 2 a maximum iteration count is also set, and when the number of repetitions of step 4 exceeds this maximum, the method proceeds to step 6.
Embodiment 3
The PET/CT tumour target area automatic segmentation method of embodiment 3 of the present invention may adopt either embodiment 1 or embodiment 2; the difference of embodiment 3 is that in step 3 the value of δ is 13.
Embodiment 4
The PET/CT tumour target area automatic segmentation method of embodiment 4 of the present invention may adopt either embodiment 1 or embodiment 2; the difference of embodiment 4 is that in step 3 the value of δ is 19.
Embodiment 5
The PET/CT tumour target area automatic segmentation method of embodiment 5 of the present invention may adopt either embodiment 1 or embodiment 2; the difference of embodiment 5 is that in step 3 the value of δ is 21.
Embodiment 6
The PET/CT tumour target area automatic segmentation method of embodiment 6 of the present invention may adopt either embodiment 1 or embodiment 2; the difference of embodiment 6 is that in step 3 the value of δ is 24.
The present invention is not limited to the above embodiments. Based on the foregoing, and using the common technical knowledge and customary means of the art, the present invention may also be realised through various other equivalent modifications, replacements or changes without departing from its basic technical idea, all of which fall within the scope of protection of the present invention.
Claims (6)
1. An automatic segmentation method for a PET/CT tumour target area, comprising the following steps:
Step 1: remove the background of the PET/CT image to be segmented, and obtain the grey-level histogram of the image after background removal;
Step 2: perform normalised statistics on the grey-level histogram and fit it with a sixth-order polynomial to obtain a fitted curve having three peaks and two troughs; take the larger of the grey values corresponding to the two troughs as the optimal firing threshold θ dividing tumour pixels from non-tumour pixels;
Step 3: establish the optimized PCNN model as follows:

F_ij[n] = S_ij;

13 ≤ δ ≤ 24, with δ an integer;

U_ij[n] = F_ij[n] · L_ij[n];

where n is the iteration number of the optimized PCNN model and is a positive integer; i and j are respectively the row and column indices of a pixel in the PET/CT image to be segmented; S_ij is the grey value of the pixel at row i, column j; δ is a weight parameter; F_ij[n], L_ij[n] and U_ij[n] are respectively the feedback input, the link input and the internal activity of the pixel at row i, column j at the n-th iteration; Y_(i,j)[n-1] is the pulse output of the pixel at row i, column j at the (n-1)-th iteration; Y_(i,j)[n] = 1 means that the pixel at row i, column j belongs to the firing sequence, and at the initial time all pixels of the image belong to the firing sequence, i.e. Y_ij[0] = 1;
Step 4: compare the internal activity U_ij[n] of each pixel belonging to the firing sequence with the optimal firing threshold θ; if U_ij[n] > θ, set Y_(i,j)[n] = 1, i.e. the pixel remains in the firing sequence; otherwise set Y_(i,j)[n] = 0, i.e. the pixel is removed from the firing sequence, completing one iteration;
Step 5: repeat step 4 until the pixels belonging to the firing sequence remain unchanged;
Step 6: judge the pixels of the PET/CT image that belong to the firing sequence to belong to the tumour region, completing the automatic segmentation of the PET/CT tumour target area.
2. The automatic segmentation method for a PET/CT tumour target area according to claim 1, characterised in that a maximum iteration count is also set in step 5, and when the number of repetitions of step 4 exceeds this maximum, the method proceeds to step 6.
3. The automatic segmentation method for a PET/CT tumour target area according to claim 1 or 2, characterised in that in step 1 the between-class-variance (Otsu) method is used to remove the background of the PET/CT image to be segmented.
4. The automatic segmentation method for a PET/CT tumour target area according to claim 1 or 2, characterised in that in step 3 the value of δ is 15.
5. The automatic segmentation method for a PET/CT tumour target area according to claim 1 or 2, characterised in that in step 3 the value of δ is 19.
6. The automatic segmentation method for a PET/CT tumour target area according to claim 1 or 2, characterised in that in step 3 the value of δ is 21.
Priority Applications (1)
- CN201410189200.6A — Automatic segmentation method for PET/CT tumor target area — priority and filing date 2014-05-06
Publications (1)
- CN104008541A — published 2014-08-27
Cited By (5)
- CN104463840A (priority 2014-09-29, published 2015-03-25, 北京理工大学): Fever to-be-checked computer aided diagnosis method based on PET/CT images
- CN108665978A (priority 2018-04-03, published 2018-10-16, 首都医科大学附属北京同仁医院): Analysis method and system for tumour MRI image dynamic contrast-enhanced degree
- CN109381161A (priority 2017-08-11, published 2019-02-26, 维布络有限公司): Cancer discrimination method and device
- CN110322426A (priority 2018-03-28, published 2019-10-11, 北京连心医疗科技有限公司): Tumor target delineation method, equipment and storage medium based on variable manikin
- CN114022491A (priority 2021-10-27, published 2022-02-08, 安徽医科大学): Small data set esophageal cancer target area image automatic delineation method based on improved spatial pyramid model
Legal Events
- PB01 — Publication (application publication date: 2014-08-27)
- SE01 — Entry into force of request for substantive examination
- RJ01 — Rejection of invention patent application after publication