AU2020103785A4 - Method for improving recognition rates of mri images of prostate tumors based on cad system - Google Patents
- Publication number
- AU2020103785A4
- Authority
- AU
- Australia
- Prior art keywords
- features
- algorithm
- prostate
- mri images
- prostate tumors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
- A61B5/004—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4842—Monitoring progression or stage of a disease
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
- A61B2576/02—Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30081—Prostate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Fuzzy Systems (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Physiology (AREA)
- Psychiatry (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
CN 104794426 A Abstract Page 1/1
The present invention belongs to the field of medical apparatus for prostate
diseases, and particularly relates to a method for improving recognition rates of MRI
images of prostate tumors based on a CAD system. The method comprises the
following steps: (1) collecting MRI images of a prostate patient; (2) extracting ROI
region features of an MRI prostate tumor; (3) performing feature-level fusion on the
ROI region features; and (4) using neural networks as classifiers to classify and
recognize the fused features. The method improves the recognition of benign and malignant prostate tumors by at least 10% and is of practical significance for the CAD of MRI prostate tumors.
CN 104794426 A Drawings of Description Page 1/3
[Figs. 1-9: drawings not reproduced in this text]
Description
CN 104794426 A Description Page 10/10
METHOD FOR IMPROVING RECOGNITION RATES OF MRI IMAGES OF PROSTATE TUMORS BASED ON CAD SYSTEM

Technical Field

The present invention belongs to the field of medical apparatus for prostate diseases, and particularly relates to a method for improving recognition rates of MRI images of prostate tumors based on a CAD system.

Background

Prostate tumors mainly include prostate cancer and prostatic hyperplasia, both occurring in the prostate; prostate cancer is a common malignant tumor. Under normal circumstances, prostatic hyperplasia generally does not turn into prostate cancer. Prostate cancer is usually discovered clinically through an abnormal digital rectal examination (DRE) and elevated serum prostate-specific antigen (PSA). Once prostate cancer is discovered, the choice of treatment depends mainly on the histological classification and clinical staging of the tumor. TNM staging and Gleason grading of prostate cancer specify the extent of disease, and correct staging is very important for deciding whether to operate, selecting treatment methods, and judging prognosis.

MRI offers three-dimensional imaging, good soft-tissue contrast, no biological damage, and no need for injected contrast agents to show the vascular structure. It can distinguish the peripheral zone of the prostate from the central gland and provide sections in any required direction, facilitating an understanding of the whole prostate and its surrounding relationships. This is conducive not only to identification and staging for choosing the correct treatment policy, but also to the smooth implementation of the surgical plan and postoperative observation. MRI therefore plays an important role in localizing prostate cancer and in examining the extent of the cancer tissue, whether it penetrates the capsule, and whether there is distant metastasis.

A computer-aided diagnosis (CAD) system is a technical means that provides quantitative analysis, reduces the diagnostic workload of doctors, and supplies diagnosis references and suggestions with good consistency and repeatability, in order to improve the diagnostic effect, reduce the number of biopsies, and improve the efficiency and objectivity of diagnosis. A large number of studies also show that CAD systems significantly improve the film-reading quality of doctors. From a technical point of view, image-based CAD is a target-recognition technology. However, existing target-recognition methods classify only by single-sample feature vectors, and the high-dimensional pattern features exhibited by large numbers of similar samples are not fully considered. Recognition results obtained by extracting only a few, or a dozen, feature dimensions from the ROI regions of MRI prostate tumors are therefore unreliable. Although there is some literature on the CAD of prostate tumors, there are few reports on prostate CAD compared with other organs (such as the breast or brain), and the existing conclusions were obtained only under small-sample conditions with limited extracted features; they generally suffer from high false-positive rates and lack prospective studies on more convincing large samples. Finding an effective method to improve the recognition rate of CAD systems for prostate tumors is therefore an urgent problem.

Summary

The purpose of the present invention is to overcome the defects of the prior art by providing a method for improving recognition rates of MRI images of prostate tumors based on a CAD system. The method improves the recognition of benign and malignant prostate tumors by at least 10% and is of practical significance for the CAD of MRI prostate tumors.

The purpose of the present invention is realized by the following technical solution. A method for improving recognition rates of MRI images of prostate tumors based on a CAD system comprises the following steps:
(1) collecting MRI images of a prostate patient;
(2) extracting ROI region features of an MRI prostate tumor;
(3) performing feature-level fusion on the ROI region features; and
(4) using neural networks as classifiers to classify and recognize the fused features.
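As a non-authoritative sketch, the four steps above can be illustrated with synthetic data. The feature extractor and the nearest-centroid classifier here are simplified stand-ins of my own devising; the patent itself extracts 102-dimensional features and classifies with neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the feature-extraction step: each ROI is
# reduced to three simple intensity statistics (the patent extracts 102).
def extract_features(roi):
    return np.array([roi.mean(), roi.var(), roi.max() - roi.min()])

# Steps (1)-(2): synthetic "ROIs" for two tumor classes, 90 samples each.
rois = [rng.normal(loc=cls, size=(32, 32)) for cls in (0.0, 1.0) for _ in range(90)]
labels = np.array([0] * 90 + [1] * 90)
T = np.stack([extract_features(r) for r in rois])     # feature space T

# Step (3): feature-level fusion by PCA, keeping 2 principal components.
Tc = T - T.mean(axis=0)
_, _, Vt = np.linalg.svd(Tc, full_matrices=False)
pca_T = Tc @ Vt[:2].T                                 # transformed space PCA_T

# Step (4): a nearest-centroid rule stands in for the neural-network classifier.
centroids = np.stack([pca_T[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(pca_T[:, None, :] - centroids, axis=2), axis=1)
accuracy = float((pred == labels).mean())
```

On this easily separable synthetic data the nearest-centroid rule classifies essentially perfectly; the point of the sketch is only the shape of the pipeline, not the accuracy figure.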
In step (2), the ROI region features include geometric features, statistical features, Hu invariant moment features, gray-level co-occurrence matrix (GLCM) texture features, TAMURA texture features, and frequency-domain features. Specifically:
- the geometric features (6 dimensions) include area, perimeter, rectangularity, elongation, circularity, and Euler number;
- the statistical features (6 dimensions) include mean, variance, gradient, kurtosis, energy, and entropy;
- the Hu invariant moment features (7 dimensions) include C1 through C7;
- the GLCM texture features (56 dimensions) include energy, contrast (inertia), entropy, sum entropy, difference entropy, correlation, inverse difference moment, variance, sum variance, sum mean, difference variance, two information measures of correlation, and maximum correlation coefficient;
- the TAMURA texture features (3 dimensions) include roughness, contrast, and directionality; and
- the frequency-domain features (24 dimensions) include wavelet characteristic energies 1-8, wavelet characteristic norms 1-8, and wavelet feature standard deviations 1-8.

In step (3), the algorithm used for feature-level fusion is a linear dimensionality-reduction algorithm. The linear dimensionality-reduction algorithm is the principal component analysis (PCA) algorithm.

In step (4), the training algorithms used by the neural networks include the BFGS quasi-Newton algorithm, the BP algorithm, the steepest gradient descent algorithm, and the Levenberg-Marquardt algorithm.

The advantageous effects of the present invention are as follows. The method uses a prostate-tumor CAD model consisting of a neural network operating on PCA-based feature-level fusion: PCA transforms the features at the feature level and reduces the dimensionality of the feature vectors, and the neural network then performs classification and recognition. The recognition rates after PCA-based feature fusion are significantly improved, by 12.5%, 17.18%, 22.22% and 22.22% respectively under the four training algorithms, indicating that the PCA algorithm adopted here is effective for feature-level fusion: it reduces the redundancy between features, eliminates the influence of some abnormal data on the experimental results, and further improves the recognition rates of MRI images of prostate tumors. The method improves the recognition of benign and malignant prostate tumors by at least 10% and is of practical significance for the CAD of MRI prostate tumors.
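A minimal sketch of three of the GLCM descriptors named above (energy, contrast, entropy) at a single horizontal offset is given below; the patent's full 56-dimensional GLCM set spans four directions (0°, 45°, 90°, 135°) and many more statistics:

```python
import numpy as np

def glcm_features(img, levels=4):
    """Energy, contrast, and entropy of the gray-level co-occurrence
    matrix (GLCM) at the single horizontal offset (0, 1)."""
    # Quantize the image to `levels` gray levels (assumes img >= 0, not all zero).
    q = np.floor(img / img.max() * (levels - 1e-9)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1          # count horizontally adjacent gray-level pairs
    p = glcm / glcm.sum()        # normalize to a joint probability table
    i, j = np.indices(p.shape)
    energy = float(np.sum(p ** 2))
    contrast = float(np.sum((i - j) ** 2 * p))
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return energy, contrast, entropy

# A checkerboard has maximal horizontal contrast: every pair of neighbors differs.
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 3.0
energy, contrast, entropy = glcm_features(checker)
```

On the checkerboard the GLCM mass splits evenly between the two off-diagonal cells, giving energy 0.5, contrast 9.0, and entropy 1.0.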
Description of Drawings

Fig. 1 shows the ROI region of prostate cancer; Fig. 2 shows the ROI region of prostatic hyperplasia; Fig. 3 shows the cumulative contribution rate of all principal components; Fig. 4 and Fig. 5 show the test results and training errors under the L-M training algorithm before PCA fusion; Fig. 6 and Fig. 7 show the test results and training errors under the quasi-Newton algorithm before PCA fusion; Fig. 8 and Fig. 9 show the test results and training errors under the BP algorithm before PCA fusion; Fig. 10 and Fig. 11 show the test results and training errors under the steepest gradient descent algorithm before PCA fusion; Fig. 12 and Fig. 13 show the test results and training errors under the L-M training algorithm after PCA fusion; Fig. 14 and Fig. 15 show the test results and training errors under the quasi-Newton algorithm after PCA fusion; Fig. 16 and Fig. 17 show the test results and training errors under the BP algorithm after PCA fusion; and Fig. 18 and Fig. 19 show the test results and training errors under the steepest gradient descent algorithm after PCA fusion.

Detailed Description

The technical solution of the present invention is further described below in combination with the drawings and specific embodiments.

Embodiment 1

The software and hardware environments involved in the present invention are as follows.
Software environment: Windows XP operating system; MATLAB 7.0 with the NN Toolbox; eFilm 3.4.
Hardware environment: 2 GB memory; 320 GB hard disk; AMD M320 processor.
Input: 1) ROI images Xi of MRI prostate tumors, i = 1, 2, ..., 180; 2) sample category number n = 2.
Output: neural-network recognition accuracy before and after feature transformation under the four training algorithms.

Steps:
Begin
for i = 1:M   // M is the number of samples; extract the 102-dimensional features of all samples
    T1i = Geometry(Xi);     // T1: subspace of geometric features, spanned by 6-dimensional feature vectors
    T2i = Statistical(Xi);  // T2: subspace of statistical features, spanned by 6-dimensional feature vectors
    T3i = Moment(Xi);       // T3: subspace of invariant moment features, spanned by 7-dimensional feature vectors
    T4i = GLCM(Xi);         // T4: subspace of gray-level co-occurrence matrix texture features, spanned by 56-dimensional feature vectors
    T5i = TAMURA(Xi);       // T5: subspace of TAMURA texture features, spanned by 3-dimensional feature vectors
    T6i = Frequency(Xi);    // T6: subspace of frequency-domain features, spanned by 24-dimensional feature vectors
end
T = {T1 T2 T3 T4 T5 T6};    // combine T1-T6 into the 102-dimensional feature space T describing the ROI region
PCA_T = PCA(T);             // apply the PCA transformation to T to obtain the transformed space PCA_T
// perform cross validation with neural networks in both spaces, T and PCA_T
for i = 1:K                 // K-fold cross validation
    RecBFGS1(i) = NNBFGS(T(i));      // recognize in T(i) with a quasi-Newton neural network
    RecBFGS2(i) = NNBFGS(PCA_T(i));  // recognize in PCA_T(i) with a quasi-Newton neural network
    RecLM1(i) = NNLM(T(i));          // recognize in T(i) with a Levenberg-Marquardt neural network
    RecLM2(i) = NNLM(PCA_T(i));      // recognize in PCA_T(i) with a Levenberg-Marquardt neural network
    RecBP1(i) = NNBP(T(i));          // recognize in T(i) with a BP neural network
    RecBP2(i) = NNBP(PCA_T(i));      // recognize in PCA_T(i) with a BP neural network
    RecGD1(i) = NNGD(T(i));          // recognize in T(i) with a gradient descent neural network
    RecGD2(i) = NNGD(PCA_T(i));      // recognize in PCA_T(i) with a gradient descent neural network
end
sum1 = 0; sum2 = 0; sum3 = 0; sum4 = 0; sum5 = 0; sum6 = 0; sum7 = 0; sum8 = 0;
for i = 1:K   // accumulate the per-fold recognition accuracies
    sum1 = sum1 + RecBFGS1(i); sum2 = sum2 + RecBFGS2(i);
    sum3 = sum3 + RecLM1(i);   sum4 = sum4 + RecLM2(i);
    sum5 = sum5 + RecBP1(i);   sum6 = sum6 + RecBP2(i);
    sum7 = sum7 + RecGD1(i);   sum8 = sum8 + RecGD2(i);
end
sum1 = sum1/K; sum2 = sum2/K; sum3 = sum3/K; sum4 = sum4/K;  // average recognition accuracy over the K folds
sum5 = sum5/K; sum6 = sum6/K; sum7 = sum7/K; sum8 = sum8/K;
end

The specific steps are as follows:

(1) Collecting MRI images of a prostate patient: the ROI regions with distinguishing value are extracted from 180 prostate MRI images (90 prostate cancer images and 90 prostatic hyperplasia images). Fig. 1 shows the ROI region of one prostate cancer MRI image, and Fig. 2 shows the ROI region of one prostatic hyperplasia MRI image.

(2) Extracting ROI features of MRI prostate tumor images: six categories of features, 102 dimensions in total, are extracted from the ROI region of each MRI prostate tumor image: geometric features (6 dimensions), statistical features (6 dimensions), Hu invariant moment features (7 dimensions), gray-level co-occurrence matrix texture features (56 dimensions), TAMURA texture features (3 dimensions), and frequency-domain features (24 dimensions). Table 1 lists the 102-dimensional features extracted from the ROI region of one prostate cancer MRI image and the ROI region of one prostatic hyperplasia MRI image.

Table 1 Feature values of ROI regions in prostate MRI images (geometric, statistical, invariant moment, GLCM texture at 0°/45°/90°/135°, TAMURA, and wavelet-packet energy/norm/standard-deviation features; the numeric values are not legibly reproduced here)

(3) Performing feature-level fusion on the ROI region features: PCA is performed separately on the prostate cancer and prostatic hyperplasia feature libraries. According to PCA theory, the cumulative contribution rate of the retained principal components is generally required to reach 85% (Xueren Wang, Songgui Wang. Practical Multivariate Statistical Analysis. Shanghai Science and Technology Press, 1990; Weiquan Yang, Lanting Liu, Hongzhou Lin. Multivariate Statistical Analysis. Higher Education Press, 1989). The cumulative contribution rate of the top ten principal components is calculated; Table 2 and Fig. 3 show the contribution rate of each principal component and the cumulative contribution rate obtained by feature-level data fusion. As shown in Table 2, once the feature vectors after feature-level fusion have at least 6 dimensions, the cumulative contribution rate reaches 85.623%, exceeding the 85% threshold reported in the references.
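The cumulative-contribution criterion above can be sketched as follows; the 180 × 102 matrix is synthetic, standing in for the patent's 102-dimensional feature space:

```python
import numpy as np

def n_components_for(X, threshold=0.85):
    """Smallest number of principal components whose cumulative
    contribution rate (explained-variance ratio) reaches `threshold`."""
    Xc = X - X.mean(axis=0)                      # center the feature matrix
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values
    ratio = s ** 2 / np.sum(s ** 2)              # per-component contribution rate
    cum = np.cumsum(ratio)                       # cumulative contribution rate
    return int(np.searchsorted(cum, threshold)) + 1, cum

# Synthetic 180 x 102 matrix with a 5-dimensional dominant subspace plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(180, 5)) @ rng.normal(size=(5, 102)) \
    + 0.01 * rng.normal(size=(180, 102))
k, cum = n_components_for(X, threshold=0.85)
```

Because nearly all of the variance lies in the 5-dimensional subspace, `k` comes out at 5 or fewer, mirroring how the patent's 102 dimensions reduce to a handful of principal components.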
In this method, to further improve the recognition rate without significantly increasing the algorithm's time overhead, principal components of 8 dimensions are used: the 8-dimensional vectors after principal component analysis represent the 102-dimensional feature vectors before analysis, realizing an effective dimensionality reduction of the feature vectors.

Table 2 Contribution rate of each principal component

Principal component | Contribution rate | Cumulative contribution rate
---|---|---
1 | 47.497 | 47.497
2 | 20.071 | 67.568
3 | 7.568 | 75.097
4 | 4.788 | 79.885
5 | 3.119 | 83.004
6 | 2.619 | 85.623
7 | 2.309 | 87.932
8 | 1.689 | 89.621
9 | 1.251 | 90.872
10 | 1.097 | 91.969

(4) Using neural networks as classifiers to classify and recognize the fused features: four training algorithms (the BFGS quasi-Newton algorithm, the BP algorithm, the steepest gradient descent algorithm, and the Levenberg-Marquardt algorithm) are used to recognize both the 102-dimensional features of the prostate tumors before the transformation and the 8-dimensional features after the transformation, with results computed by 3-fold cross validation. Figs. 4-11 show the sample test results of the neural networks under the four training algorithms before PCA fusion, Figs. 12-19 show the sample test results after PCA fusion, and Table 3 shows the recognition rates under the different training functions before and after PCA fusion.

Table 3 Recognition rates under different training functions before and after PCA fusion

 | | L-M algorithm | BP algorithm | Quasi-Newton algorithm | Gradient descent algorithm
---|---|---|---|---|---
Before PCA fusion | First run | 58.33% | 50% | 50% | 50%
 | Second run | 55% | 60% | 50% | 50%
 | Third run | 60% | 55% | 50% | 50%
 | Mean | 57.78% | 55% | 50% | 50%
After PCA fusion | First run | 60% | 61.67% | 66.67% | 55%
 | Second run | 66.67% | 76.67% | 63.33% | 60%
 | Third run | 68.33% | 55% | 53.33% | 68.33%
 | Mean | 65% | 64.45% | 61.11% | 61.11%
Improvement | | 12.5% | 17.18% | 22.22% | 22.22%
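The 3-fold evaluation protocol used here can be sketched as follows; a nearest-centroid rule stands in for the patent's neural-network classifiers, and the data are synthetic:

```python
import numpy as np

def kfold_accuracy(X, y, k=3, seed=0):
    """Mean K-fold cross-validated accuracy of a nearest-centroid rule
    (a simple stand-in for the patent's neural-network classifiers)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)   # shuffle, then split into K folds
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        classes = np.unique(y[train])
        centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in classes])
        dists = np.linalg.norm(X[test][:, None, :] - centroids, axis=2)
        pred = classes[np.argmin(dists, axis=1)]         # nearest centroid wins
        accs.append(float((pred == y[test]).mean()))
    return float(np.mean(accs))

# Two well-separated synthetic classes of 90 eight-dimensional samples each,
# echoing the 90 + 90 sample split used in the embodiment.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (90, 8)), rng.normal(3.0, 1.0, (90, 8))])
y = np.array([0] * 90 + [1] * 90)
acc = kfold_accuracy(X, y, k=3)
```

Each sample is tested exactly once across the three folds, and the reported figure is the mean of the three per-fold accuracies, matching the "Mean" rows of Table 3.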
The experimental results show that, under the four different training functions, the recognition rates of the MRI prostate tumor images after PCA-based feature fusion are significantly improved, by 12.5%, 17.18%, 22.22% and 22.22% respectively, indicating that the principal component analysis algorithm is effective for feature-level fusion in the 102-dimensional feature space of the present invention: it reduces the redundancy between features, eliminates the influence of some abnormal data in the features on the experimental results, and further improves the recognition rates of MRI images of prostate tumors. Among the four training functions, the BP neural network achieves the best recognition effect on MRI images of prostate tumors, with a relatively stable recognition rate.
Claims (6)
1. A method for improving recognition rates of MRI images of prostate tumors based on a CAD system, comprising the following steps: (1) collecting MRI images of a prostate patient; (2) extracting ROI region features of an MRI prostate tumor; (3) performing feature-level fusion on the ROI region features; and (4) using neural networks as classifiers to classify and recognize the fused features.
2. The method for improving recognition rates of MRI images of prostate tumors based on a CAD system according to claim 1, wherein in step (2), the ROI region features include geometric features, statistical features, Hu invariant moment features, gray-level co-occurrence matrix texture features, TAMURA texture features, and frequency domain features.
3. The method for improving recognition rates of MRI images of prostate tumors based on a CAD system according to claim 2, wherein in step (2), the geometric features include: area, perimeter, rectangularity, elongation, circularity, Euler number; the statistical features include: mean, variance, gradient, kurtosis, energy, entropy; the Hu invariant moment features include: C1, C2, C3, C4, C5, C6, C7; the gray-level co-occurrence matrix texture features include: energy, contrast ratio, entropy, sum entropy, difference entropy, correlation, inverse difference moment, variance, variance of sum, mean of sum, difference variance, related information measure, and maximum correlation coefficient; the TAMURA texture features include: roughness, contrast ratio, and directionality; and the frequency domain features include: energy, norm, and standard deviation.
4. The method for improving recognition rates of MRI images of prostate tumors based on a CAD system according to claim 1, wherein in step (3), the algorithm used in the feature-level fusion is a linear dimensionality-reduction algorithm.
5. The method for improving recognition rates of MRI images of prostate tumors based on a CAD system according to claim 4, wherein the linear dimensionality-reduction algorithm is a principal component analysis (PCA) algorithm.
6. The method for improving recognition rates of MRI images of prostate tumors based on a CAD system according to any one of the foregoing claims, wherein in step (4), the training algorithms used by the neural networks include a BFGS quasi-Newton algorithm, a BP algorithm, a steepest gradient descent algorithm and a Levenberg-Marquardt algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020103785A AU2020103785A4 (en) | 2020-11-30 | 2020-11-30 | Method for improving recognition rates of mri images of prostate tumors based on cad system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020103785A AU2020103785A4 (en) | 2020-11-30 | 2020-11-30 | Method for improving recognition rates of mri images of prostate tumors based on cad system |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2020103785A4 true AU2020103785A4 (en) | 2021-02-11 |
Family
ID=74502351
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2020103785A Ceased AU2020103785A4 (en) | 2020-11-30 | 2020-11-30 | Method for improving recognition rates of mri images of prostate tumors based on cad system |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2020103785A4 (en) |
-
2020
- 2020-11-30 AU AU2020103785A patent/AU2020103785A4/en not_active Ceased
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Cameron et al. | MAPS: a quantitative radiomics approach for prostate cancer detection | |
Tang et al. | Computer-aided detection and diagnosis of breast cancer with mammography: recent advances | |
Hambarde et al. | Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net | |
Scheipers et al. | Ultrasonic multifeature tissue characterization for prostate diagnostics | |
JP2009095550A (en) | Diagnosis support apparatus, method for controlling diagnosis support apparatus, and program of the same | |
AU2010280527A1 (en) | Apparatus and method for registering two medical images | |
CN104794426A (en) | Method for improving prostate tumor MRI (Magnetic Resonance Imaging) image identification rate based on CAD (Computer-Aided Diagnosis) system | |
WO2012109658A2 (en) | Systems, methods and computer readable storage mediums storing instructions for segmentation of medical images | |
JP5456132B2 (en) | Diagnosis support device, diagnosis support device control method, and program thereof | |
Hasan et al. | Performance of grey level statistic features versus Gabor wavelet for screening MRI brain tumors: A comparative study | |
Femil et al. | An Efficient Hybrid Optimization for Skin Cancer Detection Using PNN Classifier. | |
Danku et al. | Cancer Diagnosis With the Aid of Artificial Intelligence Modeling Tools | |
Armya et al. | Medical images segmentation based on unsupervised algorithms: a review | |
CN112508943A (en) | Breast tumor identification method based on ultrasonic image | |
AU2020103785A4 (en) | Method for improving recognition rates of mri images of prostate tumors based on cad system | |
Vocaturo et al. | Artificial intelligence approaches on ultrasound for breast cancer diagnosis | |
CN115797308A (en) | DCE-MRI-based breast tumor segmentation method | |
Chen et al. | Predictions for Central Lymph Node Metastasis of Papillary Thyroid Carcinoma via CNN-Based Fusion Modeling of Ultrasound Images. | |
Ding et al. | A novel wavelet-transform-based convolution classification network for cervical lymph node metastasis of papillary thyroid carcinoma in ultrasound images | |
Sarkar et al. | Review of Artificial Intelligence methods for detecting cancer in medical image processing | |
Sheppard et al. | Efficient image texture analysis and classification for prostate ultrasound diagnosis | |
Seyed Abolghasemi et al. | Accuracy Improvement of Breast Tumor Detection based on Dimension Reduction in the Spatial and Edge Features and Edge Structure in the Image | |
Kesana et al. | Brain Tumor Detection Using YOLOv5 and Faster R-CNN | |
KR102490461B1 (en) | Target data prediction method using correlation information based on multi medical image | |
Xu et al. | Novel Robust Automatic Brain-Tumor Detection and Segmentation Using Magnetic Resonance Imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |