CN113298754A - Detection method for contour line control points of prostate tissue - Google Patents
- Publication number: CN113298754A (application CN202110387981.XA)
- Authority: CN (China)
- Prior art keywords: prostate tissue; contour line; points; control points; tissue contour
- Legal status: Granted
Classifications
- G06T7/0012 — Biomedical image inspection
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; learning methods
- G06T7/13 — Edge detection
- G06T7/181 — Segmentation or edge detection involving edge growing or edge linking
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30081 — Prostate
Abstract
The invention discloses a method for detecting control points of a prostate tissue contour line, comprising the following steps: labeling the prostate tissue contour lines of a plurality of original nuclear magnetic resonance images to obtain a plurality of prostate tissue contour lines, and extracting the pixel coordinates of each contour line; selecting feature points from the pixel coordinates of each contour line, and generating a corresponding heat map to obtain a data set; constructing a U-Net network and initializing the training parameters; inputting the data set into the U-Net network for training to obtain a network model; inputting an original nuclear magnetic resonance image into the network model to predict feature points, and connecting the predicted feature points to obtain the prostate tissue contour line. By adopting a custom loss function, the prediction error of the control points is effectively reduced; meanwhile, through an interpretable training scheme the model continuously learns the feature information of the control points, improving its efficiency in predicting them, so that the contour line of the prostate tissue in a nuclear magnetic resonance image is finally obtained efficiently and automatically.
Description
Technical Field
The invention belongs to the technical field of image processing and relates to a method for detecting control points of the prostate tissue contour line.
Background
In recent years, clinical methods for detecting prostate cancer have included prostate-specific antigen testing, B-mode ultrasound of the prostate, needle biopsy, MR examination, and the like. Among these, magnetic resonance imaging is the most effective imaging method for diagnosing prostate cancer and can help doctors judge prostate diseases more efficiently. Magnetic resonance imaging involves no ionizing radiation, offers high soft-tissue resolution, and supports multi-parameter, multi-orientation, multi-sequence imaging, so it is widely used in the clinical diagnosis of tumors, heart disease, cerebrovascular disease, and the like.
With the rapid development of deep learning, neural networks have been widely applied to medical image processing, for example the Inception-v3 network for judging tumors in pathological sections, the U-Net network for locating the prostate in CT (computed tomography) images, and deep learning for calculating cardiac volume. Assessing a patient's prostate condition by visually inspecting nuclear magnetic resonance images depends heavily on the experience of the radiologist and is affected by emotion, cognition, bias, and other factors that can lead to misjudgment, so labeling of the prostate tissue contour line remains an urgent problem to be solved. Traditional image processing techniques such as the scale-invariant feature transform (SIFT) cannot label the control points of the prostate contour line on diffusion-weighted (DWI) nuclear magnetic resonance images, and contour labeling today is mostly done by hand, which is inefficient, time-consuming, and labor-intensive.
Disclosure of Invention
The invention aims to provide a method for detecting contour line control points of prostate tissues, which solves the problem of low manual labeling efficiency in the prior art.
The invention adopts the technical scheme that the method for detecting the control points of the contour line of the prostate tissue comprises the following steps:
step 1, labeling prostate tissue contour lines of a plurality of original nuclear magnetic resonance images to obtain a plurality of prostate tissue contour lines, and extracting pixel coordinates of each prostate tissue contour line;
step 2, selecting feature points from the pixel coordinates of each prostate tissue contour line, and generating a corresponding heat map to obtain a data set;
step 3, constructing a U-Net network, and initializing training parameters;
step 4, inputting the data set into a U-Net network for training to obtain a network model;
and step 5, inputting the original nuclear magnetic resonance image into the network model to obtain predicted feature points, and connecting the predicted feature points to obtain the prostate tissue contour line.
The invention is also characterized in that:
the step 2 specifically comprises the following steps:
step 2.1, counting the number of pixel coordinates of each prostate tissue contour line, and selecting a proportion of the pixel coordinates as prostate contour line control points, the control points being divided evenly into upper-half control points and lower-half control points;
step 2.2, obtaining the maximum value max_x and the minimum value min_x of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, max_x, and min_x;
step 2.3, traversing from min_x to max_x and, at each spacing interval, calculating the contour y values corresponding to the pixel x value to obtain the two contour control points (x_i, y_up_i) and (x_i, y_down_i), until the traversal is finished, forming the first feature points;
step 2.4, obtaining the x-axis central point coordinate x_Center and, in the same way, the y-axis central point coordinate y_Center from the minimum value min_x and the maximum value max_x, forming the second feature point;
step 2.5, combining the first feature points and the second feature point into the feature points used as the initial point coordinates of the network training data set; meanwhile, making a heat map from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, and setting the background pixel intensity and the feature-point pixel intensity of the heat map to obtain an initial data set.
Before step 3, the data set is normalized and augmented.
The U-Net network in the step 3 adopts a custom loss function, and the custom loss function is as follows:
LOSS = λ1·LOSS1 + λ2·LOSS2 (2);
in the above equation, λ1 and λ2 are weighting coefficients; ω is a penalty term within LOSS1, and abs() denotes the absolute value.
The step 4 specifically comprises the following steps:
step 4.1, inputting the data set into the U-Net network, taking the labeled nuclear magnetic resonance image as the input picture and the heat map as the label, and training for multiple rounds to obtain a first network model;
step 4.2, inputting the original nuclear magnetic resonance image into the first network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a second data set;
step 4.3, inputting the second data set into the first network model for training to obtain a second network model;
step 4.4, repeating steps 4.2-4.3 until the model converges, obtaining the final network model.
Step 5 specifically comprises: inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points of the outermost contour to obtain the prostate tissue contour line.
The invention has the beneficial effects that:
the detection method of the prostate tissue contour line control points adopts the user-defined loss function, and can effectively reduce the prediction error of the control points; meanwhile, the characteristic information of the control points is continuously learned through an interpretable training mode, the efficiency of predicting the control points by the model is improved, the contour line of the prostate tissue in the nuclear magnetic resonance image is finally obtained efficiently and automatically, and the contour line is provided for doctors to judge the illness state of the patients in time.
Drawings
FIG. 1 is a flow chart of a method of detecting control points of a prostate tissue contour line according to the present invention;
FIG. 2 is a manually labeled nuclear magnetic resonance DWI prostate tissue image used by the detection method of the present invention;
FIG. 3 is a nuclear magnetic resonance DWI prostate tissue image showing the contour line obtained by the detection method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
A method for detecting control points of a contour line of prostate tissue, as shown in fig. 1, comprising the following steps:
step 1, a doctor manually color-labels the prostate tissue contour line on a plurality of original nuclear magnetic resonance images to obtain a plurality of prostate tissue contour lines, each original image containing one prostate tissue contour line, and the colored pixel coordinates of each contour line are extracted programmatically;
step 2, selecting characteristic points in each prostate tissue contour line pixel coordinate, and generating a corresponding heat map to obtain a data set;
step 2.1, counting the number color_num of pixel coordinates of each prostate tissue contour line, and selecting a proportion of the pixel coordinates as prostate contour line control points; since the prostate contour line has an upper half and a lower half, the control points comprise upper-half control points and lower-half control points, with the number point_num_up of upper-half control points equal to the number of lower-half control points;
step 2.2, obtaining the maximum value max_x and the minimum value min_x of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, max_x, and min_x, by the formula:
distance = (max_x - min_x) / point_num_up (1);
step 2.3, traversing from min_x to max_x and, at each distance interval, calculating the contour y values corresponding to the pixel x value to obtain the two contour control points (x_i, y_up_i) and (x_i, y_down_i), until the traversal is finished, giving the first feature points;
step 2.4, obtaining the x-axis central point coordinate x_Center according to the minimum value min_x and the maximum value max_x, and obtaining the y-axis central point coordinate y_Center of the prostate tissue in the same way; x_Center and y_Center form the second feature point;
step 2.5, the first feature points and the second feature point together form the feature points of the prostate contour, used as the initial point coordinates of the network training data set; meanwhile, a heat map is made from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, and the background pixel intensity and feature-point pixel intensity of the heat map are set, obtaining the initial data set (steps 2.1-2.4 are sketched below).
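As an illustrative sketch of steps 2.1-2.4: the function below assumes `contour` is an (N, 2) integer array of (x, y) pixel coordinates of one labeled contour line, takes the column-wise minimum and maximum y as the upper- and lower-contour values (the patent only says "the contour y values corresponding to the pixel x value"), and uses names that are not from the patent; the default ratio of one fifth matches the worked example below (28 of 140 pixels).

```python
import numpy as np

def select_control_points(contour: np.ndarray, ratio: float = 0.2):
    """contour: (N, 2) array of (x, y) pixel coordinates of one contour line."""
    point_num = max(int(len(contour) * ratio), 2)   # e.g. 28 of 140 pixels
    point_num_up = point_num // 2                   # even split: upper / lower half
    min_x, max_x = int(contour[:, 0].min()), int(contour[:, 0].max())
    distance = (max_x - min_x) / point_num_up       # x-axis spacing, formula (1)

    points = []
    x = float(min_x)
    while x <= max_x:                               # step 2.3: traverse min_x..max_x
        col = contour[contour[:, 0] == int(round(x))]
        if len(col):
            points.append((int(round(x)), int(col[:, 1].min())))  # upper contour
            points.append((int(round(x)), int(col[:, 1].max())))  # lower contour
        x += distance

    # step 2.4: the second feature point is the center of the bounding box
    x_center = (min_x + max_x) // 2
    y_center = (int(contour[:, 1].min()) + int(contour[:, 1].max())) // 2
    points.append((x_center, y_center))
    return points
```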
Step 3, normalizing and augmenting the data set to improve the network training effect; building a U-Net network, initializing the training parameters, and adopting a custom loss function:
LOSS = λ1·LOSS1 + λ2·LOSS2 (2);
in the above equation, λ1 and λ2 weight the two terms; ω is a penalty term within LOSS1 that keeps LOSS1 from growing too large, and abs() denotes the absolute value.
LOSS2 divides the two heat maps into 11 × 11 windows by a sliding window, calculates within each window the Euclidean distance between the data-set point and the predicted point, and adds these distances up as LOSS2, so as to pull the predicted points toward the data-set points and reinforce the effect of LOSS1; written out,
LOSS2 = Σ_k sqrt((x_k − x̂_k)² + (y_k − ŷ_k)²),
where (x_k, y_k) and (x̂_k, ŷ_k) are the data-set point and the predicted point in the k-th window.
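A hedged PyTorch sketch of how the two-part loss could be assembled (the λ defaults follow the values used in the example below). The patent does not reproduce the formula for LOSS1, only that it contains a penalty term ω, so the plain MSE here is an assumption with ω omitted; LOSS2 follows the description: both heat maps are tiled into 11 × 11 windows and the Euclidean distances between the peak positions in corresponding windows are summed. The peaks are located with argmax, which carries no gradient, so in this sketch LOSS2 is effectively a monitoring term.

```python
import torch
import torch.nn.functional as F

def control_point_loss(pred, target, lam1=0.5, lam2=1.5, win=11):
    """pred, target: (B, C, H, W) heat maps with H and W divisible by win."""
    # LOSS1: pixel-wise heat-map loss (assumed MSE; the patent's penalty
    # term ω belongs to a formula not reproduced in the text)
    loss1 = F.mse_loss(pred, target)

    # LOSS2: tile both maps into non-overlapping win x win windows
    pu = pred.unfold(2, win, win).unfold(3, win, win)    # (B, C, nh, nw, win, win)
    tu = target.unfold(2, win, win).unfold(3, win, win)
    b, c, nh, nw = pu.shape[:4]
    pi = pu.reshape(b, c, nh, nw, -1).argmax(-1)         # peak index per window
    ti = tu.reshape(b, c, nh, nw, -1).argmax(-1)

    # Euclidean distance between predicted and data-set peaks in each window
    dy = (torch.div(pi, win, rounding_mode="floor")
          - torch.div(ti, win, rounding_mode="floor")).float()
    dx = (pi % win - ti % win).float()
    loss2 = torch.sqrt(dx ** 2 + dy ** 2 + 1e-8).sum()   # argmax => no gradient here

    return lam1 * loss1 + lam2 * loss2
```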
step 4, inputting the data set into a U-Net network for training to obtain a network model;
step 4.1, inputting the data set into the U-Net network, taking the labeled nuclear magnetic resonance images as input pictures and the heat maps as labels, and training for multiple rounds to obtain a first network model;
step 4.2, during the interaction between the human and the deep learning model, the aim is to reconcile contradictions or inconsistencies between human cognition and network training: the original nuclear magnetic resonance images are input into the first network model for prediction to obtain predicted feature points, the predicted feature points are marked on the original images and manually adjusted so that they better match what the human eye observes, yielding a second data set;
step 4.3, inputting the second data set into the first network model for several further rounds of training, so that the model keeps learning the features of the feature points in the heat maps, obtaining a second network model;
step 4.4, repeating steps 4.2-4.3 until the model converges, obtaining the final network model.
Step 5, inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points of the outermost contour to obtain the prostate tissue contour line.
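Step 5 does not specify how the predicted feature points are read out of the network's heat map or how the outermost points are connected; the sketch below assumes each control point appears as a local intensity peak (threshold 100 against the example's peak intensity of 200) and approximates "connecting the outermost points" with a convex hull. Both choices are illustrative, not from the patent.

```python
import numpy as np
import cv2
from scipy.ndimage import maximum_filter

def heatmap_to_points(heatmap: np.ndarray, threshold: float = 100.0):
    # a pixel is a peak if it equals the local maximum and clears the threshold
    peaks = (heatmap == maximum_filter(heatmap, size=11)) & (heatmap > threshold)
    ys, xs = np.nonzero(peaks)
    return np.stack([xs, ys], axis=1)            # (K, 2) predicted points

def connect_outer_contour(points: np.ndarray):
    # connect only the outermost predicted points, here via a convex hull
    hull = cv2.convexHull(points.astype(np.int32))
    return hull.reshape(-1, 2)                   # ordered contour vertices
```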
Through the above approach, the detection method for prostate tissue contour line control points adopts a custom loss function, which effectively reduces the prediction error of the control points; meanwhile, through an interpretable training scheme the model continuously learns the feature information of the control points, improving the efficiency with which it predicts them, so that the contour line of the prostate tissue in a nuclear magnetic resonance image is finally obtained efficiently and automatically and can be provided to doctors to assess the patient's condition in time.
Examples
Step 1, a doctor manually marks the prostate tissue contour line on a plurality of original nuclear magnetic resonance images, obtaining contour lines as shown in FIG. 2; code then splits each image into its three RGB channels and extracts the red pixels, i.e., those whose R channel is 255 and whose G and B channels are 0, giving the colored pixel coordinates of the prostate tissue contour line (a sketch of this extraction follows);
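A minimal sketch of this red-pixel extraction, assuming the annotation is pure red (R = 255, G = B = 0) as stated; the file name is hypothetical.

```python
import numpy as np
import cv2

img = cv2.imread("labeled_mri.png")          # hypothetical file; OpenCV loads BGR
b, g, r = cv2.split(img)                     # split the three color channels
mask = (r == 255) & (g == 0) & (b == 0)      # pure-red annotation pixels
ys, xs = np.nonzero(mask)
contour = np.stack([xs, ys], axis=1)         # (N, 2) contour pixel coordinates
```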
step 2, counting the number color_num of pixel coordinates of each prostate tissue contour line as 140, and selecting one fifth of the pixel coordinates as prostate contour line control points, giving 28 control points; the number point_num_up of upper-half control points and the number of lower-half control points are both 14;
acquiring the maximum value max_x = 145 and the minimum value min_x = 100 of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, max_x, and min_x: distance = (145 - 100)/14 ≈ 3;
traversing from min_x = 100 to max_x = 145 and calculating, every 3 pixels, the contour y values corresponding to the pixel x value, obtaining pairs of contour control points such as (100, 25) and (100, 32), until the traversal is finished, giving the first feature points;
obtaining the x-axis central point coordinate x_Center = 122 from the minimum value min_x and the maximum value max_x, and the y-axis central point coordinate y_Center = 28 of the prostate tissue in the same way, forming the second feature point (122, 28);
the first feature points and the second feature point together form the feature points of the prostate contour, used as the initial point coordinates of the network training data set; meanwhile, a heat map is made from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, with the background pixel intensity set to 0 and the feature-point pixel intensity set to 200, giving the initial data set (a sketch of this construction follows).
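A sketch of this heat-map construction with the example's intensities (background 0, feature points 200). The patent does not say whether each feature point occupies a single pixel or a small blob, so the optional Gaussian spread is an assumption, as is the default image size.

```python
import numpy as np

def make_heatmap(points, shape=(256, 256), peak=200.0, sigma=None):
    hm = np.zeros(shape, dtype=np.float32)               # background intensity 0
    for x, y in points:
        hm[y, x] = peak                                  # feature-point intensity 200
    if sigma:                                            # optional Gaussian spread
        from scipy.ndimage import gaussian_filter
        hm = gaussian_filter(hm, sigma)
        hm *= peak / (hm.max() + 1e-8)                   # restore the peak value
    return hm
```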
Step 3, normalizing and augmenting the data set. Because the number of images in the data set is small, data augmentation is used: each training pair of labeled nuclear magnetic resonance image and heat map is flipped horizontally, flipped vertically, and flipped both horizontally and vertically, and together with the original data this forms a 4× data set for network training (see the sketch after this step). A U-Net network is built and the training parameters are initialized: the network trains for 12 epochs, the optimizer is Adam with the learning rate set to 0.0001, and the custom loss function is adopted with λ1 set to 0.5, λ2 to 1.5, p to 2.5, and ω to 0.0001;
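A sketch of the 4× augmentation described in this step; flipping the image and its heat map together keeps the point labels aligned.

```python
import numpy as np

def augment(image: np.ndarray, heatmap: np.ndarray):
    pairs = [(image, heatmap)]                             # original pair
    pairs.append((np.fliplr(image), np.fliplr(heatmap)))   # horizontal flip
    pairs.append((np.flipud(image), np.flipud(heatmap)))   # vertical flip
    pairs.append((np.flipud(np.fliplr(image)),
                  np.flipud(np.fliplr(heatmap))))          # both flips
    return pairs                                           # 4x the data
```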
step 4, inputting the data set into the U-Net network, taking the labeled nuclear magnetic resonance images as input pictures and the heat maps as labels, and training for multiple rounds to obtain a first network model;
inputting the original nuclear magnetic resonance image into a first network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a second data set;
inputting the second data set into the first network model to train for 12 epochs, and enabling the model to continuously learn the characteristics of the characteristic points in the heat map to obtain a second network model;
inputting the second data set into a second network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a third data set;
and inputting the third data set into a second network model for training, so that the model tends to converge, and a final network model is obtained. After the two times of adjustment, the characteristic information of the control point is continuously learned, and the capability of the model for predicting the control point is improved.
Step 5, inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points of the outermost contour to obtain the prostate tissue contour line, as shown in FIG. 3.
Claims (6)
1. A method for detecting control points of a prostate tissue contour line is characterized by comprising the following steps:
step 1, labeling prostate tissue contour lines of a plurality of original nuclear magnetic resonance images to obtain a plurality of prostate tissue contour lines, and extracting pixel coordinates of each prostate tissue contour line;
step 2, selecting feature points from the pixel coordinates of each prostate tissue contour line, and generating a corresponding heat map to obtain a data set;
step 3, constructing a U-Net network, and initializing training parameters;
step 4, inputting the data set into a U-Net network for training to obtain a network model;
and step 5, inputting the original nuclear magnetic resonance image into the network model to obtain predicted feature points, and connecting the predicted feature points to obtain the prostate tissue contour line.
2. The method for detecting the control points of the prostate tissue contour line according to claim 1, wherein the step 2 comprises the following steps:
step 2.1, counting the number of pixel coordinates of each prostate tissue contour line, and selecting a proportion of the pixel coordinates as prostate contour line control points, the control points being divided evenly into upper-half control points and lower-half control points;
step 2.2, obtaining the maximum value max_x and the minimum value min_x of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, max_x, and min_x;
step 2.3, traversing from min_x to max_x and, at each spacing interval, calculating the contour y values corresponding to the pixel x value to obtain the two contour control points (x_i, y_up_i) and (x_i, y_down_i), until the traversal is finished, forming the first feature points;
step 2.4, obtaining the x-axis central point coordinate x_Center and, in the same way, the y-axis central point coordinate y_Center from the minimum value min_x and the maximum value max_x, forming the second feature point;
step 2.5, combining the first feature points and the second feature point into the feature points used as the initial point coordinates of the network training data set; meanwhile, making a heat map from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, and setting the background pixel intensity and the feature-point pixel intensity of the heat map to obtain an initial data set.
3. The method of claim 1, wherein the data set is normalized and augmented before step 3.
5. The method for detecting the control points of the prostate tissue contour line according to claim 1, wherein the step 4 comprises the following steps:
step 4.1, inputting the data set into a U-Net network, taking the labeled nuclear magnetic resonance image as the input picture and the heat map as the label, and training for multiple rounds to obtain a first network model;
step 4.2, inputting the original nuclear magnetic resonance image into a first network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a second data set;
step 4.3, inputting the second data set into the first network model for training to obtain a second network model;
step 4.4, repeating steps 4.2-4.3 until the model converges to obtain the final network model.
6. The method for detecting the control points of the prostate tissue contour line according to claim 1, wherein step 5 specifically comprises: inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points of the outermost contour to obtain the prostate tissue contour line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110387981.XA CN113298754B (en) | 2021-04-12 | 2021-04-12 | Method for detecting control points of outline of prostate tissue |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110387981.XA CN113298754B (en) | 2021-04-12 | 2021-04-12 | Method for detecting control points of outline of prostate tissue |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113298754A true CN113298754A (en) | 2021-08-24 |
CN113298754B CN113298754B (en) | 2024-02-06 |
Family
ID=77319589
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110387981.XA Active CN113298754B (en) | 2021-04-12 | 2021-04-12 | Method for detecting control points of outline of prostate tissue |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298754B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116310627A (en) * | 2023-01-16 | 2023-06-23 | 北京医准智能科技有限公司 | Model training method, contour prediction device, electronic equipment and medium |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3053487A1 (en) * | 2017-02-22 | 2018-08-30 | The United States Of America, As Represented By The Secretary, Department Of Health And Human Services | Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks |
US20200020082A1 (en) * | 2018-07-10 | 2020-01-16 | The Board Of Trustees Of The Leland Stanford Junior University | Un-supervised convolutional neural network for distortion map estimation and correction in MRI |
CN109636806A (en) * | 2018-11-22 | 2019-04-16 | 浙江大学山东工业技术研究院 | A kind of three-dimensional NMR pancreas image partition method based on multistep study |
CN109919216A (en) * | 2019-02-28 | 2019-06-21 | 合肥工业大学 | A kind of confrontation learning method for computer-aided diagnosis prostate cancer |
CN110008992A (en) * | 2019-02-28 | 2019-07-12 | 合肥工业大学 | A kind of deep learning method for prostate cancer auxiliary diagnosis |
CN111754472A (en) * | 2020-06-15 | 2020-10-09 | 南京冠纬健康科技有限公司 | Pulmonary nodule detection method and system |
Non-Patent Citations (3)
Title |
---|
LING Tong; YANG Wanqi; YANG Ming: "Prostate segmentation in CT images using a multimodal U-shaped network", CAAI Transactions on Intelligent Systems, no. 06 *
LIU Yunpeng; LIU Guangpin; WANG Renfang; JIN Ran; SUN Dechao; QIU Hong; DONG Chen; LI Jin; HONG Guobin: "CT segmentation of liver tumors combining deep learning with radiomics", Journal of Image and Graphics, no. 10 *
ZHAN Shu; LIANG Zhicheng; XIE Dongdong: "Deconvolutional neural network method for prostate magnetic resonance image segmentation", Journal of Image and Graphics, no. 04 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116310627A (en) * | 2023-01-16 | 2023-06-23 | 北京医准智能科技有限公司 | Model training method, contour prediction device, electronic equipment and medium |
CN116310627B (en) * | 2023-01-16 | 2024-02-02 | 浙江医准智能科技有限公司 | Model training method, contour prediction device, electronic equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN113298754B (en) | 2024-02-06 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |