CN113298754A - Detection method for contour line control points of prostate tissue - Google Patents

Detection method for contour line control points of prostate tissue

Info

Publication number
CN113298754A
Authority
CN
China
Prior art keywords
prostate tissue
contour line
points
control points
tissue contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110387981.XA
Other languages
Chinese (zh)
Other versions
CN113298754B (en)
Inventor
Jin Haiyan (金海燕)
Wang Haipeng (王海鹏)
Xiao Zhaolin (肖照林)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202110387981.XA priority Critical patent/CN113298754B/en
Publication of CN113298754A publication Critical patent/CN113298754A/en
Application granted granted Critical
Publication of CN113298754B publication Critical patent/CN113298754B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30081Prostate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a method for detecting control points of a prostate tissue contour line, which comprises the following steps: labeling the prostate tissue contour lines of a plurality of original nuclear magnetic resonance images to obtain a plurality of prostate tissue contour lines, and extracting the pixel coordinates of each prostate tissue contour line; selecting feature points from the pixel coordinates of each prostate tissue contour line and generating corresponding heat maps to obtain a data set; constructing a U-Net network and initializing training parameters; inputting the data set into the U-Net network for training to obtain a network model; inputting the original nuclear magnetic resonance image into the network model to obtain predicted feature points, and connecting the predicted feature points to obtain the prostate tissue contour line. The custom loss function effectively reduces the prediction error of the control points; meanwhile, through an interpretable training mode the model continuously learns the feature information of the control points, improving the efficiency with which it predicts them, so that the contour line of the prostate tissue in the nuclear magnetic resonance image is finally obtained efficiently and automatically.

Description

Detection method for contour line control points of prostate tissue
Technical Field
The invention belongs to the technical field of image processing, and relates to a method for detecting control points of a contour line of a prostate tissue.
Background
In recent years, clinical prostate cancer detection methods have included prostate-specific antigen testing, B-mode ultrasound of the prostate, needle biopsy, and MR (magnetic resonance) examination. Among these, magnetic resonance imaging is the most effective imaging method for diagnosing prostate cancer and can help physicians judge prostate disease more efficiently. It involves no ionizing radiation, offers high soft-tissue resolution, and supports multi-parameter, multi-orientation, multi-sequence imaging, so it is widely used in the clinical diagnosis of tumors, heart disease, cerebrovascular disease, and other conditions.
With the rapid development of deep learning, neural networks are widely applied to medical image processing, for example the Inception-v3 network for judging tumors in pathological sections, the U-Net network for locating the prostate in CT (computed tomography) images, and deep learning methods for calculating cardiac volume. Assessing a patient's prostate condition by observing nuclear magnetic resonance images depends heavily on the experience of the imaging physician and is affected by emotion, cognition, and bias, which can lead to misjudgment, so labeling of the prostate tissue contour line remains a problem to be solved urgently. Traditional image processing techniques such as the Scale-Invariant Feature Transform (SIFT) cannot label the control points of the prostate contour line on diffusion-weighted nuclear magnetic resonance images (DWI), and contour line labeling is currently done mainly by hand, which is inefficient, time-consuming, and labor-intensive.
Disclosure of Invention
The invention aims to provide a method for detecting contour line control points of prostate tissues, which solves the problem of low manual labeling efficiency in the prior art.
The invention adopts the technical scheme that the method for detecting the control points of the contour line of the prostate tissue comprises the following steps:
step 1, labeling prostate tissue contour lines of a plurality of original nuclear magnetic resonance images to obtain a plurality of prostate tissue contour lines, and extracting pixel coordinates of each prostate tissue contour line;
step 2, selecting feature points from the pixel coordinates of each prostate tissue contour line, and generating corresponding heat maps to obtain a data set;
step 3, constructing a U-Net network, and initializing training parameters;
step 4, inputting the data set into a U-Net network for training to obtain a network model;
step 5, inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points to obtain the prostate tissue contour line.
The invention is also characterized in that:
the step 2 specifically comprises the following steps:
step 2.1, counting the number of pixel coordinates of each prostate tissue contour line and selecting a proportion of the pixel coordinates as prostate contour line control points, the prostate contour line control points being divided evenly into upper-half control points and lower-half control points;
step 2.2, obtaining the maximum value max_x and the minimum value min_x of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, the maximum value max_x, and the minimum value min_x;
step 2.3, traversing from the minimum value min_x to the maximum value max_x and, at every spacing step, computing the contour y values corresponding to the pixel x value to obtain the two prostate tissue contour line control points (x_i, y_up_i) and (x_i, y_down_i), until the traversal finishes, forming the first feature points;
step 2.4, obtaining the x-axis center coordinate x_Center and the y-axis center coordinate y_Center from the minimum value min_x and the maximum value max_x, forming the second feature point;
step 2.5, combining the first feature points and the second feature point into the feature points used as the initial point coordinates of the network training data set; meanwhile, making a heat map from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, and setting the background pixel intensity and the feature point pixel intensity of the heat map to obtain the initial data set.
Before step 3, the data set is normalized and augmented.
The U-Net network in the step 3 adopts a custom loss function, and the custom loss function is as follows:
LOSS = λ1*LOSS1 + λ2*LOSS2 (2);
[Equations (3) and (4), defining LOSS1 and LOSS2, appear only as images in the original publication and are not reproduced here.]
In the above equations, ω is a penalty term and abs() denotes the absolute value.
The step 4 specifically comprises the following steps:
step 4.1, inputting the data set into a U-Net network, taking the labeled nuclear magnetic resonance image as an input picture and the heat image as a label, and performing multiple training to obtain a first network model;
step 4.2, inputting the original nuclear magnetic resonance image into the first network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a second data set;
step 4.3, inputting the second data set into the first network model for training to obtain a second network model;
step 4.4, repeating steps 4.2-4.3 until the model converges to obtain the final network model.
Step 5 is specifically: inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points on the outermost contour to obtain the prostate tissue contour line.
The invention has the beneficial effects that:
the detection method of the prostate tissue contour line control points adopts the user-defined loss function, and can effectively reduce the prediction error of the control points; meanwhile, the characteristic information of the control points is continuously learned through an interpretable training mode, the efficiency of predicting the control points by the model is improved, the contour line of the prostate tissue in the nuclear magnetic resonance image is finally obtained efficiently and automatically, and the contour line is provided for doctors to judge the illness state of the patients in time.
Drawings
FIG. 1 is a flow chart of a method of detecting control points of a prostate tissue contour line according to the present invention;
FIG. 2 is a manually labeled nuclear magnetic resonance DWI prostate tissue image used by the method for detecting prostate tissue contour line control points according to the present invention;
FIG. 3 is a nuclear magnetic resonance DWI prostate tissue image labeled by the method for detecting prostate tissue contour line control points according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
A method for detecting control points of a contour line of prostate tissue, as shown in fig. 1, comprising the following steps:
step 1, a doctor manually labels the prostate tissue contour line in color on a plurality of original nuclear magnetic resonance images, each original nuclear magnetic resonance image containing one prostate tissue contour line, giving a plurality of prostate tissue contour lines; the color pixel coordinates of each prostate tissue contour line are then extracted by code;
step 2, selecting feature points from the pixel coordinates of each prostate tissue contour line, and generating corresponding heat maps to obtain a data set;
step 2.1, counting the number color_num of pixel coordinates of each prostate tissue contour line and selecting a proportion of the pixel coordinates as prostate contour line control points; because the prostate contour line comprises an upper half and a lower half, the control points comprise upper-half control points and lower-half control points, and the number of upper-half control points point_num_up equals the number of lower-half control points;
step 2.2, obtaining the maximum value max_x and the minimum value min_x of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, the maximum value max_x, and the minimum value min_x, with the formula:
distance = (max_x - min_x) / point_num_up (1);
step 2.3, traversing from the minimum value min_x to the maximum value max_x and, at every distance step, computing the contour y values corresponding to the pixel x value to obtain the two prostate tissue contour line control points (x_i, y_up_i) and (x_i, y_down_i), until the traversal finishes, giving the first feature points;
step 2.4, obtaining the x-axis center coordinate x_Center from the minimum value min_x and the maximum value max_x, and likewise the y-axis center coordinate y_Center of the prostate tissue; x_Center and y_Center form the second feature point;
step 2.5, combining the first feature points and the second feature point into the feature points of the prostate contour, used as the initial point coordinates of the network training data set; meanwhile, making a heat map from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, and setting the background pixel intensity and the feature point pixel intensity of the heat map to obtain the initial data set.
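By way of illustration, steps 2.1 to 2.4 can be sketched in Python roughly as follows. The sketch is not part of the original disclosure: it assumes a roughly convex contour, so that each x column contains one upper and one lower contour pixel, and the function name and return structure are illustrative.

```python
import numpy as np

def select_control_points(contour_xy, point_num_up=14):
    # contour_xy: (N, 2) array of (x, y) contour pixel coordinates.
    # point_num_up: number of upper-half control points (14 in the
    # embodiment below); the lower half gets the same number.
    xs, ys = contour_xy[:, 0], contour_xy[:, 1]
    min_x, max_x = int(xs.min()), int(xs.max())
    # Equation (1): x-axis spacing between consecutive control points.
    distance = max(1, round((max_x - min_x) / point_num_up))
    first_points = []
    for x in range(min_x, max_x + 1, distance):
        col = ys[xs == x]            # contour pixels in this x column
        if col.size == 0:            # assumes a roughly convex contour;
            continue                 # empty columns are simply skipped
        first_points.append((x, int(col.min())))  # upper point (x_i, y_up_i)
        first_points.append((x, int(col.max())))  # lower point (x_i, y_down_i)
    # Step 2.4: bounding-box centre as the second feature point.
    x_center = (min_x + max_x) // 2
    y_center = (int(ys.min()) + int(ys.max())) // 2
    return first_points, (x_center, y_center)
```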
Step 3, normalizing and augmenting the data set to improve network training; building a U-Net network, initializing the training parameters, and adopting a custom loss function as follows:
LOSS = λ1*LOSS1 + λ2*LOSS2 (2);
[Equation (3), defining LOSS1, appears only as an image in the original publication and is not reproduced here.]
In equation (3), ω is a penalty term that keeps LOSS1 from becoming too large, and abs() denotes the absolute value.
LOSS2 divides the two heat maps into 11 × 11 windows with a sliding window, computes the Euclidean distance between the data-set point and the predicted point within each window, and sums these distances as LOSS2, so as to shrink the distance between predicted points and data-set points and reinforce the effect of LOSS1. The formula is as follows:
[Equation (4), defining LOSS2, appears only as an image in the original publication and is not reproduced here.]
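A PyTorch-style sketch of the custom loss follows. Since equation (3) is published only as an image, the exact form of loss1 below is an assumption assembled from the described ingredients (abs(), the exponent p, and the penalty term ω); loss2 implements the 11 × 11 window description above. Note that argmax is not differentiable, so gradients do not flow through the window coordinates; a practical training implementation would substitute a soft-argmax.

```python
import torch

def loss1(pred, target, p=2.5, omega=1e-4):
    # Hypothetical form of equation (3): only abs(), the exponent p and
    # the penalty term omega are described in the text, so this exact
    # expression is an assumption, not the published formula.
    return omega * torch.sum(torch.abs(pred - target) ** p)

def loss2(pred, target, win=11):
    # Equation (4) as described: tile both heat maps with 11x11 windows
    # and sum, over windows containing a labelled point, the Euclidean
    # distance between the ground-truth point and the predicted point.
    h, w = target.shape
    total = pred.new_tensor(0.0)
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            t_win = target[r:r + win, c:c + win]
            if t_win.max() <= 0:     # no labelled point in this window
                continue
            p_win = pred[r:r + win, c:c + win]
            ti, pi = torch.argmax(t_win), torch.argmax(p_win)
            ty, tx = ti // win, ti % win
            py, px = pi // win, pi % win
            total = total + torch.sqrt((ty - py).float() ** 2 +
                                       (tx - px).float() ** 2)
    return total

def custom_loss(pred, target, lam1=0.5, lam2=1.5):
    # Equation (2): weighted sum of the two terms (weights per embodiment).
    return lam1 * loss1(pred, target) + lam2 * loss2(pred, target)
```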
step 4, inputting the data set into a U-Net network for training to obtain a network model;
step 4.1, inputting the data set into a U-Net network, taking the labeled nuclear magnetic resonance image as an input picture and the heat image as a label, and performing multiple training to obtain a first network model;
step 4.2, in the process of human interaction with the deep learning model, the aim is to reconcile contradictions or inconsistencies between human cognition and network training: the original nuclear magnetic resonance image is input into the first network model for prediction to obtain predicted feature points, the predicted feature points are marked on the original nuclear magnetic resonance image, and they are manually adjusted so that they better match what the human eye observes, giving a second data set;
step 4.3, inputting the second data set into the first network model and training multiple times, so that the model continuously learns the features of the feature points in the heat map, giving a second network model;
step 4.4, repeating steps 4.2-4.3 until the model converges, giving the final network model.
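The human-in-the-loop training of steps 4.1-4.4 can be summarized by the following sketch; train_epochs, predict_points, and manual_adjust are hypothetical callables standing in for the routines described above, with manual_adjust representing the physician's hand correction of the predicted points.

```python
def iterative_training(train_epochs, predict_points, manual_adjust,
                       unet, dataset, raw_images, rounds=2):
    # Step 4.1: train the freshly built U-Net to get the first model.
    model = train_epochs(unet, dataset)
    for _ in range(rounds):                        # embodiment adjusts twice
        points = predict_points(model, raw_images)     # step 4.2: predict
        dataset = manual_adjust(raw_images, points)    # hand-correct points
        model = train_epochs(model, dataset)           # step 4.3: retrain
    return model                                   # step 4.4: final model
```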
Step 5, inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points on the outermost contour to obtain the prostate tissue contour line.
In this way, the detection method of prostate tissue contour line control points adopts a custom loss function, which effectively reduces the prediction error of the control points; meanwhile, through an interpretable training mode the model continuously learns the feature information of the control points, improving its efficiency in predicting them, so that the prostate tissue contour line in the nuclear magnetic resonance image is finally obtained efficiently and automatically, allowing doctors to assess the patient's condition in time.
Examples
Step 1, a doctor manually labels the prostate tissue contour lines of a plurality of original nuclear magnetic resonance images, giving a plurality of prostate tissue contour lines as shown in fig. 2; code is written to split an image into its three RGB channels and extract the red pixel coordinates (R channel equal to 255, G and B channels equal to 0), giving the color pixel coordinates of the prostate tissue contour lines;
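A minimal sketch of this red-channel extraction, assuming the labelled image is stored as an ordinary RGB file (the function name and file path handling are placeholders):

```python
import numpy as np
from PIL import Image

def extract_red_contour(path):
    # Split the labelled image into R, G, B channels and keep pixels with
    # R = 255 and G = B = 0, i.e. the physician's red contour annotation.
    img = np.array(Image.open(path).convert("RGB"))
    mask = (img[..., 0] == 255) & (img[..., 1] == 0) & (img[..., 2] == 0)
    ys, xs = np.nonzero(mask)
    return np.stack([xs, ys], axis=1)  # (N, 2) array of (x, y) coordinates
```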
step 2, counting the number color_num of pixel coordinates of each prostate tissue contour line as 140 and selecting 5% of the pixel coordinates as prostate contour line control points, giving 28 prostate contour line control points; the number of upper-half control points point_num_up and the number of lower-half control points are both 14;
acquiring the maximum value max_x = 145 and the minimum value min_x = 100 of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, max_x, and min_x: distance = (145 - 100)/14 ≈ 3;
traversing from min_x = 100 to max_x = 145 and, every 3 pixels, computing the contour y values corresponding to the pixel x value, obtaining for example the two prostate tissue contour line control points (100, 25) and (100, 32); when the traversal finishes, the first feature points are obtained;
obtaining the x-axis center coordinate x_Center = 122 from the minimum value min_x and the maximum value max_x, and likewise the y-axis center coordinate y_Center = 28 of the prostate tissue, forming the second feature point (122, 28);
combining the first feature points and the second feature point into the feature points of the prostate contour, used as the initial point coordinates of the network training data set; meanwhile, making a heat map from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, setting the heat map's background pixel intensity to 0 and the feature point pixel intensity to 200, giving the initial data set.
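The heat-map construction just described can be sketched as follows; the slice shape is whatever the MRI resolution is, and whether each peak is additionally blurred (e.g. with a Gaussian) is not specified in the patent:

```python
import numpy as np

def make_heat_map(points, shape, peak=200):
    # Background intensity 0, intensity 200 at every feature point,
    # matching the embodiment above. `shape` is (height, width).
    heat = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        heat[int(y), int(x)] = peak
    return heat
```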
Step 3, normalizing and augmenting the data set: because the data set contains few images, data augmentation is used, and each labeled nuclear magnetic resonance image and heat map pair used for training is flipped horizontally, flipped vertically, and flipped both horizontally and vertically, forming, together with the original data, a 4x data set for network training; building a U-Net network and initializing the training parameters: the network is trained for 12 epochs, the optimizer is Adam with the learning rate set to 0.0001, and the custom loss function is adopted with λ1 set to 0.5, λ2 set to 1.5, p set to 2.5, and ω set to 0.0001;
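The 4x flip augmentation described above, sketched with NumPy (the function name is illustrative):

```python
import numpy as np

def flip_augment(image, heat):
    # Original pair plus its horizontal, vertical, and combined
    # horizontal-and-vertical flips: a 4x data set per training pair.
    return [
        (image, heat),
        (np.fliplr(image).copy(), np.fliplr(heat).copy()),
        (np.flipud(image).copy(), np.flipud(heat).copy()),
        (np.flipud(np.fliplr(image)).copy(), np.flipud(np.fliplr(heat)).copy()),
    ]
```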
step 4, inputting the data set into a U-Net network, taking the labeled nuclear magnetic resonance image as an input image and the heat image as a label, and performing multiple training to obtain a first network model;
inputting the original nuclear magnetic resonance image into a first network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a second data set;
inputting the second data set into the first network model to train for 12 epochs, and enabling the model to continuously learn the characteristics of the characteristic points in the heat map to obtain a second network model;
inputting the second data set into the second network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a third data set;
the third data set is then input into the second network model for training until the model converges, giving the final network model. Through these two rounds of adjustment, the feature information of the control points is continuously learned and the model's ability to predict control points improves.
Step 5, inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points on the outermost contour to obtain the prostate tissue contour line, as shown in fig. 3.
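A sketch of step 5's post-processing under stated assumptions: the predicted feature points are recovered by thresholding the predicted heat map (the patent gives no threshold value), the centre feature point is dropped heuristically, and the outermost points are connected by angular ordering around their centroid.

```python
import numpy as np

def contour_from_prediction(pred_heat, threshold=100):
    ys, xs = np.nonzero(pred_heat > threshold)   # candidate feature points
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    # Drop the predicted centre point (step 2.4) so only the outermost
    # contour points remain; the 0.5 factor is a heuristic assumption.
    dist = np.linalg.norm(pts - center, axis=1)
    pts = pts[dist > 0.5 * dist.mean()]
    # Order the remaining points by angle around the centroid and close
    # the polygon to form the contour polyline.
    angles = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    ordered = pts[np.argsort(angles)]
    return np.vstack([ordered, ordered[:1]])
```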

Claims (6)

1. A method for detecting control points of a prostate tissue contour line is characterized by comprising the following steps:
step 1, labeling prostate tissue contour lines of a plurality of original nuclear magnetic resonance images to obtain a plurality of prostate tissue contour lines, and extracting pixel coordinates of each prostate tissue contour line;
step 2, selecting characteristic points in each pixel coordinate of the prostate tissue contour line, and generating a corresponding heat map to obtain a data set;
step 3, constructing a U-Net network, and initializing training parameters;
step 4, inputting the data set into a U-Net network for training to obtain a network model;
step 5, inputting the original nuclear magnetic resonance image into the network model for prediction to obtain predicted feature points, and connecting the predicted feature points to obtain the prostate tissue contour line.
2. The method for detecting the control points of the prostate tissue contour line according to claim 1, wherein the step 2 comprises the following steps:
step 2.1, counting the number of pixel coordinates of each prostate tissue contour line and selecting a proportion of the pixel coordinates as prostate contour line control points, the prostate contour line control points being divided evenly into upper-half control points and lower-half control points;
step 2.2, obtaining the maximum value max_x and the minimum value min_x of the pixel coordinates in the x-axis direction, and calculating the x-axis spacing between control points from the number of upper-half control points, the maximum value max_x, and the minimum value min_x;
step 2.3, traversing from the minimum value min_x to the maximum value max_x and, at every spacing step, computing the contour y values corresponding to the pixel x value to obtain the two prostate tissue contour line control points (x_i, y_up_i) and (x_i, y_down_i), until the traversal finishes, forming the first feature points;
step 2.4, obtaining the x-axis center coordinate x_Center and the y-axis center coordinate y_Center from the minimum value min_x and the maximum value max_x, forming the second feature point;
step 2.5, combining the first feature points and the second feature point into the feature points used as the initial point coordinates of the network training data set; meanwhile, making a heat map from the feature points of the prostate tissue contour line of each nuclear magnetic resonance image, and setting the background pixel intensity and the feature point pixel intensity of the heat map to obtain the initial data set.
3. The method of claim 1, wherein said data set is normalized and data enhanced prior to step 3.
4. The method of claim 1, wherein the U-Net network of step 3 uses a custom loss function, the custom loss function being:
LOSS = λ1*LOSS1 + λ2*LOSS2 (2);
[Equations (3) and (4), defining LOSS1 and LOSS2, appear only as images in the original publication and are not reproduced here.]
In the above equations, ω is a penalty term and abs() denotes the absolute value.
5. The method for detecting the control points of the prostate tissue contour line according to claim 1, wherein the step 4 comprises the following steps:
step 4.1, inputting the data set into a U-Net network, taking the labeled nuclear magnetic resonance image as an input picture and the heat image as a label, and performing multiple training to obtain a first network model;
step 4.2, inputting the original nuclear magnetic resonance image into a first network model for prediction to obtain predicted feature points, marking the predicted feature points on the original nuclear magnetic resonance image, and manually adjusting the predicted feature points to obtain a second data set;
step 4.3, inputting the second data set into the first network model for training to obtain a second network model;
step 4.4, repeating steps 4.2-4.3 until the model converges to obtain the final network model.
6. The method for detecting the control points of the prostate tissue contour line according to claim 1, wherein the step 5 is specifically as follows: inputting the original nuclear magnetic resonance image into a network model for prediction to obtain predicted characteristic points, and connecting the predicted characteristic points of the outermost contour to obtain a prostate tissue contour line.
CN202110387981.XA 2021-04-12 2021-04-12 Method for detecting control points of outline of prostate tissue Active CN113298754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110387981.XA CN113298754B (en) 2021-04-12 2021-04-12 Method for detecting control points of outline of prostate tissue

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110387981.XA CN113298754B (en) 2021-04-12 2021-04-12 Method for detecting control points of outline of prostate tissue

Publications (2)

Publication Number Publication Date
CN113298754A true CN113298754A (en) 2021-08-24
CN113298754B CN113298754B (en) 2024-02-06

Family

ID=77319589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110387981.XA Active CN113298754B (en) 2021-04-12 2021-04-12 Method for detecting control points of outline of prostate tissue

Country Status (1)

Country Link
CN (1) CN113298754B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310627A (en) * 2023-01-16 2023-06-23 北京医准智能科技有限公司 Model training method, contour prediction device, electronic equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3053487A1 (en) * 2017-02-22 2018-08-30 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks
CN109636806A (en) * 2018-11-22 2019-04-16 浙江大学山东工业技术研究院 A kind of three-dimensional NMR pancreas image partition method based on multistep study
CN109919216A (en) * 2019-02-28 2019-06-21 合肥工业大学 A kind of confrontation learning method for computer-aided diagnosis prostate cancer
CN110008992A (en) * 2019-02-28 2019-07-12 合肥工业大学 A kind of deep learning method for prostate cancer auxiliary diagnosis
US20200020082A1 (en) * 2018-07-10 2020-01-16 The Board Of Trustees Of The Leland Stanford Junior University Un-supervised convolutional neural network for distortion map estimation and correction in MRI
CN111754472A (en) * 2020-06-15 2020-10-09 南京冠纬健康科技有限公司 Pulmonary nodule detection method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3053487A1 (en) * 2017-02-22 2018-08-30 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks
US20200020082A1 (en) * 2018-07-10 2020-01-16 The Board Of Trustees Of The Leland Stanford Junior University Un-supervised convolutional neural network for distortion map estimation and correction in MRI
CN109636806A (en) * 2018-11-22 2019-04-16 浙江大学山东工业技术研究院 A kind of three-dimensional NMR pancreas image partition method based on multistep study
CN109919216A (en) * 2019-02-28 2019-06-21 合肥工业大学 A kind of confrontation learning method for computer-aided diagnosis prostate cancer
CN110008992A (en) * 2019-02-28 2019-07-12 合肥工业大学 A kind of deep learning method for prostate cancer auxiliary diagnosis
CN111754472A (en) * 2020-06-15 2020-10-09 南京冠纬健康科技有限公司 Pulmonary nodule detection method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ling Tong; Yang Wanqi; Yang Ming: "Prostate segmentation in CT images using a multimodal U-shaped network", CAAI Transactions on Intelligent Systems, no. 06
Liu Yunpeng; Liu Guangpin; Wang Renfang; Jin Ran; Sun Dechao; Qiu Hong; Dong Chen; Li Jin; Hong Guobin: "Liver tumor CT segmentation combining deep learning with radiomics", Journal of Image and Graphics, no. 10
Zhan Shu; Liang Zhicheng; Xie Dongdong: "A deconvolutional neural network method for prostate magnetic resonance image segmentation", Journal of Image and Graphics, no. 04

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310627A (en) * 2023-01-16 2023-06-23 北京医准智能科技有限公司 Model training method, contour prediction device, electronic equipment and medium
CN116310627B (en) * 2023-01-16 2024-02-02 浙江医准智能科技有限公司 Model training method, contour prediction device, electronic equipment and medium

Also Published As

Publication number Publication date
CN113298754B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
US11645748B2 (en) Three-dimensional automatic location system for epileptogenic focus based on deep learning
JP6947759B2 (en) Systems and methods for automatically detecting, locating, and semantic segmenting anatomical objects
Kim et al. Enhanced airway-tissue boundary segmentation for real-time magnetic resonance imaging data
El-Baz et al. Appearance analysis for diagnosing malignant lung nodules
CN109064443B (en) Multi-model organ segmentation method based on abdominal ultrasonic image
CN109035160A (en) The fusion method of medical image and the image detecting method learnt based on fusion medical image
CN110930416A (en) MRI image prostate segmentation method based on U-shaped network
Huang et al. Channel-attention U-Net: Channel attention mechanism for semantic segmentation of esophagus and esophageal cancer
WO2022121100A1 (en) Darts network-based multi-modal medical image fusion method
CN110503626B (en) CT image modality alignment method based on space-semantic significance constraint
Wu et al. Registration of longitudinal brain image sequences with implicit template and spatial–temporal heuristics
CN113420826A (en) Liver focus image processing system and image processing method
Liu et al. Multimodal MRI brain tumor image segmentation using sparse subspace clustering algorithm
CN106056589A (en) Ultrasound contrast perfusion parameter imaging method based on respiratory motion compensation
Cui et al. Bidirectional cross-modality unsupervised domain adaptation using generative adversarial networks for cardiac image segmentation
Zhang et al. Brain atlas fusion from high-thickness diagnostic magnetic resonance images by learning-based super-resolution
CN116258732A (en) Esophageal cancer tumor target region segmentation method based on cross-modal feature fusion of PET/CT images
CN113298830A (en) Acute intracranial ICH region image segmentation method based on self-supervision
CN113298754B (en) Method for detecting control points of outline of prostate tissue
CN109003280A (en) Inner membrance dividing method in a kind of blood vessel of binary channels intravascular ultrasound image
CN103236062B (en) Based on the magnetic resonance image (MRI) blood vessel segmentation system in human brain tumour's nuclear-magnetism storehouse
CN114332910A (en) Human body part segmentation method for similar feature calculation of far infrared image
CN114581499A (en) Multi-modal medical image registration method combining intelligent agent and attention mechanism
CN102663728B (en) Dictionary learning-based medical image interactive joint segmentation
CN113222979A (en) Multi-map-based automatic skull base foramen ovale segmentation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant