CN115830005A - Prostate margin positive prediction method fusing 3D feature calculation - Google Patents


Info

Publication number
CN115830005A
CN115830005A
Authority
CN
China
Prior art keywords
prostate
training
image
medical
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310029875.3A
Other languages
Chinese (zh)
Inventor
林劼
肖新宇
梁玉龙
白毅
曾祥雨
王勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202310029875.3A
Publication of CN115830005A
Legal status: Pending

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a prostate margin positive prediction method that fuses 3D feature calculation, comprising the steps of preprocessing prostate medical images; training an image generation model; generating prostate 3D data; and training a 3D deep learning model. The preprocessing converts the medical images into a dataset suitable for deep learning. Training the image generation model produces, from that dataset, a model capable of generating medical images. Generating the prostate 3D data uses partial medical images to produce a complete image series usable for 3D reconstruction. Training the 3D deep learning model fits a deep neural network that extracts features of the prostate 3D model, finally yielding the neural network and machine learning parameters. The invention can be used for postoperative prediction of positive surgical margins in prostate cancer and achieves good accuracy.

Description

Prostate margin positive prediction method fusing 3D feature calculation
Technical Field
The invention relates to the field of artificial intelligence, in particular to a prostate margin positive prediction method fusing 3D feature calculation.
Background
In recent years, as artificial intelligence technology has matured, research on deep neural models has made substantial progress and found application in many areas of life, such as e-commerce, finance and stock markets, social networks, and healthcare. Since the beginning of the digital imaging era, the generation of massive data has opened new possibilities for the development of medical imaging. How to further analyze and mine medical image big data, how to extract valuable information from its high-dimensional data, and how to closely combine modern medical imaging with precision medicine have therefore become important topics for the field's future development.
A positive surgical margin is one of the problems frequently encountered in radical prostatectomy. Because of tumor size and position, the anatomical characteristics of the prostate, and other factors, the tumor may not be completely resected, leaving the pathological specimen margin-positive and affecting the patient's prognosis and treatment strategy. Preoperatively identifying risk factors for margin positivity and understanding its characteristics are therefore significant. In recent years, many studies abroad have established nomogram models, based on multiple parameters, for predicting the risk of positive surgical margins in prostate cancer. However, because the incidence of prostate cancer varies between countries and regions, models from other countries cannot be applied directly to patients in China. At present, China has no deep learning model for predicting postoperative margin positivity in prostate cancer trained on actual samples. This invention therefore proposes a deep learning model for postoperative margin-positive prediction trained on a small sample, which, combined with other patient data, provides a basis for clinicians' decisions.
Disclosure of Invention
The invention aims to solve the problem of predicting positive surgical margins after prostate cancer surgery, and provides a prostate margin positive prediction method fused with 3D feature calculation.
The purpose of the invention is realized by the following technical scheme:
a prostate margin positive prediction method fused with 3D feature calculation comprises the following steps:
step S1, preprocessing a prostate medical image;
s2, training an image generation model;
step S3, generating prostate 3D data;
and S4, training a 3D deep learning model.
Specifically, the prostate medical image preprocessing further comprises the following sub-steps:
Step S11, screening the coronal-plane images in which the prostate appears prominently out of the complete layered (slice-by-slice) prostate medical image;
Step S12, selecting the first, last, and middle images from these coronal-plane images;
Step S13, arranging the slices by axial (horizontal) position and selecting the first, last, and middle images;
Step S14, arranging the slices by sagittal position and selecting the leftmost, rightmost, and middle images;
Step S15, collecting the images from these three viewing planes to convert the medical image into a dataset suitable for deep neural network learning.
Specifically, training the image generation model inputs the dataset into the image generation network and trains a model capable of generating intermediate medical images, comprising the following sub-steps:
Step S21, building the Encoder structure;
Step S22, determining the middle-layer vector size; the default is an array of length 100;
Step S23, building the Decoder structure;
Step S24, constructing the training loss function;
Step S25, training the network on the dataset as the training set with a suitable number of iterations (default 100), obtaining the generation network parameters after training.
Specifically, the Encoder structure consists of four down-sampling layers and one convolution layer; each down-sampling layer consists of three convolution layers and one pooling layer.
The Decoder structure consists of four up-sampling layers; each up-sampling layer consists of three convolution layers and one upsampling function.
Specifically, the loss function is given by a formula reproduced only as an image in the original publication; its terms are the variance of the added noise, the original output vector of the Encoder, and the middle-layer vector dimension.
Specifically, generating the prostate 3D data inputs two medical images into the trained image generation network model, generates a series of medical images between them, and from this series assembles a complete medical image set usable for 3D feature analysis, comprising the following sub-steps:
Step S31, taking the Encoder part of the generation network and inputting two adjacent medical images into it to obtain their middle-layer vectors;
Step S32, averaging the middle-layer vectors of the two adjacent medical images to obtain the middle-layer vector of the image to be generated;
Step S33, inputting this middle-layer vector into the Decoder to generate an intermediate image;
Step S34, repeating the above steps until enough intermediate images are generated;
the default number of intermediate images is 64 in total.
Specifically, training the 3D deep learning model takes as input the complete series of medical images usable for 3D feature analysis together with other patient features, takes the margin-positive probability as output, and trains a 3D deep learning neural network, comprising the following sub-steps:
Step S41, constructing a 3D deep learning network that outputs a feature vector;
Step S42, combining the feature vector output in step S41 with other relevant patient features into a new feature vector;
Step S43, taking the feature vector obtained in step S42 as input and the patient's margin-positive result as the target, training a machine learning algorithm to finally obtain the 3D deep learning network and machine learning parameters.
Specifically, the input scale of the 3D deep learning network is 3 × 64 channels, and the output is an array of length 50.
Specifically, the other relevant patient characteristics include age, blood pressure, and other data strongly correlated with the diagnostic result.
Specifically, the machine learning algorithm includes a logistic regression algorithm.
The invention has the beneficial effects that:
the invention provides a prostate margin positive prediction method fusing 3D feature calculation, which is characterized in that a prediction model of a diagnosis result is obtained through a deep neural network technology on the basis of medical history data, and the model can be used for effectively predicting the postoperative margin positive result of a new patient and providing reference for the patient and medical staff.
Drawings
FIG. 1 is a system block diagram of a prostate margin positive prediction method incorporating 3D feature computation according to the present invention;
FIG. 2 is a schematic diagram of the training of the image-generating neural network of the present invention;
FIG. 3 is a schematic diagram of an Encoder structure of an image generation neural network according to the present invention;
FIG. 4 is a schematic diagram of image interpolation of the image-generating neural network of the present invention;
FIG. 5 is a schematic diagram of the training of the 3D feature extraction network of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention. In addition, the technical solutions of different embodiments may be combined with each other, but only on the basis that a person skilled in the art can realize the combination; when the solutions contradict each other or cannot be realized, the combination should be considered not to exist and is not within the protection scope of the present invention.
As shown in fig. 1, a prostate margin positive prediction method fused with 3D feature calculation includes the following steps:
step S1, preprocessing a prostate medical image;
s2, training an image generation model;
step S3, generating prostate 3D data;
and S4, training a 3D deep learning model.
Specifically, the prostate medical image preprocessing further comprises the following sub-steps:
Step S11, screening the coronal-plane images in which the prostate appears prominently out of the complete layered (slice-by-slice) prostate medical image;
Step S12, selecting the first, last, and middle images from these coronal-plane images;
Step S13, arranging the slices by axial (horizontal) position and selecting the first, last, and middle images;
Step S14, arranging the slices by sagittal position and selecting the leftmost, rightmost, and middle images;
Step S15, collecting the images from these three viewing planes to convert the medical image into a dataset suitable for deep neural network learning.
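As an informal illustration (not part of the patent text) of the slice selection in steps S11 through S15, picking the first, middle, and last slices along each anatomical axis of a volume can be sketched in Python with NumPy; the volume shape and the axis ordering here are assumptions:

```python
import numpy as np

def pick_three(volume, axis):
    """Return the first, middle, and last 2D slices of a 3D volume along `axis`."""
    n = volume.shape[axis]
    return [np.take(volume, i, axis=axis) for i in (0, n // 2, n - 1)]

# Hypothetical layered scan: 12 slices of 64x64 pixels, read along the
# coronal (0), axial (1), and sagittal (2) axes.
volume = np.zeros((12, 64, 64))
dataset = [img for ax in range(3) for img in pick_three(volume, ax)]
# nine images in total: three per viewing plane
```

The nine resulting images form one patient's entry in the training dataset.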
Specifically, as shown in fig. 2, training the image generation model inputs the dataset into the image generation network and trains a model capable of generating intermediate medical images, comprising the following sub-steps:
Step S21, building the Encoder structure;
Step S22, determining the middle-layer vector size; the default is an array of length 100;
Step S23, building the Decoder structure;
Step S24, constructing the training loss function;
Step S25, training the network on the dataset as the training set with a suitable number of iterations (default 100), obtaining the generation network parameters after training.
Specifically, as shown in fig. 3, the Encoder structure consists of four down-sampling layers and one convolution layer; each down-sampling layer consists of three convolution layers and one pooling layer.
The Decoder structure consists of four up-sampling layers; each up-sampling layer consists of three convolution layers and one upsampling function.
Specifically, the loss function is given by a formula reproduced only as an image in the original publication; its terms are the variance of the added noise, the original output vector of the Encoder, and the middle-layer vector dimension.
Specifically, as shown in fig. 4, generating the prostate 3D data inputs two medical images into the trained image generation network model, generates a series of medical images between them, and from this series assembles a complete medical image set usable for 3D feature analysis, comprising the following sub-steps:
Step S31, taking the Encoder part of the generation network and inputting two adjacent medical images into it to obtain their middle-layer vectors;
Step S32, averaging the middle-layer vectors of the two adjacent medical images to obtain the middle-layer vector of the image to be generated;
Step S33, inputting this middle-layer vector into the Decoder to generate an intermediate image;
Step S34, repeating the above steps until enough intermediate images are generated;
the default number of intermediate images is 64 in total.
Specifically, as shown in fig. 5, training the 3D deep learning model takes as input the complete series of medical images usable for 3D feature analysis together with other patient features, takes the margin-positive probability as output, and trains a 3D deep learning neural network, comprising the following sub-steps:
Step S41, constructing a 3D deep learning network that outputs a feature vector;
Step S42, combining the feature vector output in step S41 with other relevant patient features into a new feature vector;
Step S43, taking the feature vector obtained in step S42 as input and the patient's margin-positive result as the target, training a machine learning algorithm to finally obtain the 3D deep learning network and machine learning parameters.
Specifically, the input scale of the 3D deep learning network is 3 × 64 channels, and the output is an array of length 50.
Specifically, the other relevant patient characteristics include age, blood pressure, and other data strongly correlated with the diagnostic result.
Specifically, the machine learning algorithm includes a logistic regression algorithm.
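As an illustrative sketch of steps S42 and S43 (feature concatenation followed by a logistic regression classifier), the following uses synthetic stand-in data: the 50-dimensional network output, the two clinical features, and the labels are all fabricated for the example and are not the patent's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 50-dim feature vectors from the 3D network plus two
# clinical features (e.g. age, blood pressure), for 200 patients.
net_features = rng.normal(size=(200, 50))
clinical = rng.normal(size=(200, 2))
X = np.hstack([net_features, clinical])       # S42: concatenate into one vector
y = (X[:, 0] + X[:, 50] > 0).astype(float)    # synthetic margin-positive labels

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# S43: fit logistic regression by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    grad = sigmoid(X @ w + b) - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

prob = sigmoid(X @ w + b)   # predicted margin-positive probability
```

On this linearly separable toy data the fitted model recovers the generating features; in the patent's setting `prob` would be the reported margin-positive probability for each patient.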
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the forms disclosed herein, and these are not to be construed as excluding other embodiments; various other combinations, modifications, and environments may be used, and changes may be made within the scope of the concepts described herein, as indicated by the above teachings or by the skill or knowledge of the relevant art. Modifications and variations effected by those skilled in the art without departing from the spirit and scope of the invention fall within the protection scope of the appended claims.

Claims (10)

1. A prostate margin positive prediction method fused with 3D feature calculation is characterized by comprising the following steps:
step S1, preprocessing a prostate medical image;
s2, training an image generation model;
step S3, generating prostate 3D data;
and S4, training a 3D deep learning model.
2. The method of claim 1, wherein the prostate medical image preprocessing further comprises the following sub-steps:
Step S11, screening the coronal-plane images in which the prostate appears prominently out of the complete layered prostate medical image;
Step S12, selecting the first, last, and middle images from these coronal-plane images;
Step S13, arranging the slices by axial (horizontal) position and selecting the first, last, and middle images;
Step S14, arranging the slices by sagittal position and selecting the leftmost, rightmost, and middle images;
Step S15, collecting the images from these three viewing planes to convert the medical image into a dataset suitable for deep neural network learning.
3. The method for prostate margin positive prediction fusing 3D feature calculation according to claim 1, wherein the image generation model training inputs the dataset into the image generation network and trains a model capable of generating intermediate medical images, comprising the following sub-steps:
Step S21, building the Encoder structure;
Step S22, determining the middle-layer vector size; the default is an array of length 100;
Step S23, building the Decoder structure;
Step S24, constructing the training loss function;
Step S25, training the network on the dataset as the training set with a suitable number of iterations (default 100), obtaining the generation network parameters after training.
4. The method for prostate margin positive prediction fusing 3D feature calculation according to claim 3, wherein the Encoder structure consists of four down-sampling layers and one convolution layer; each down-sampling layer consists of three convolution layers and one pooling layer;
the Decoder structure consists of four up-sampling layers; each up-sampling layer consists of three convolution layers and one upsampling function.
5. The method of claim 3, wherein the loss function is given by a formula reproduced only as an image in the original publication; its terms are the variance of the added noise, the original output vector of the Encoder, and the middle-layer vector dimension.
6. The method for prostate margin positive prediction fusing 3D feature calculation according to claim 1, wherein generating the prostate 3D data inputs two medical images into the trained image generation network model, generates a series of medical images between them, and from this series assembles a complete medical image set usable for 3D feature analysis, comprising the following sub-steps:
Step S31, taking the Encoder part of the generation network and inputting two adjacent medical images into it to obtain their middle-layer vectors;
Step S32, averaging the middle-layer vectors of the two adjacent medical images to obtain the middle-layer vector of the image to be generated;
Step S33, inputting this middle-layer vector into the Decoder to generate an intermediate image;
Step S34, repeating the above steps until enough intermediate images are generated;
the default number of intermediate images is 64 in total.
7. The method for prostate margin positive prediction fusing 3D feature calculation according to claim 1, wherein training the 3D deep learning model takes as input the complete series of medical images usable for 3D feature analysis together with other patient features, takes the margin-positive probability as output, and trains a 3D deep learning neural network, comprising the following sub-steps:
Step S41, constructing a 3D deep learning network that outputs a feature vector;
Step S42, combining the feature vector output in step S41 with other relevant patient features into a new feature vector;
Step S43, taking the feature vector obtained in step S42 as input and the patient's margin-positive result as the target, training a machine learning algorithm to finally obtain the 3D deep learning network and machine learning parameters.
8. The method according to claim 7, wherein the input scale of the 3D deep learning network is 3 × 64 channels and the output is an array of length 50.
9. The method of claim 7, wherein the other relevant patient characteristics include age, blood pressure, and other data strongly correlated with the diagnostic result.
10. The method of claim 7, wherein the machine learning algorithm comprises a logistic regression algorithm.
CN202310029875.3A 2023-01-09 2023-01-09 Prostate margin positive prediction method fusing 3D feature calculation Pending CN115830005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310029875.3A CN115830005A (en) 2023-01-09 2023-01-09 Prostate margin positive prediction method fusing 3D feature calculation


Publications (1)

Publication Number Publication Date
CN115830005A true CN115830005A (en) 2023-03-21

Family

ID=85520482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310029875.3A Pending CN115830005A (en) 2023-01-09 2023-01-09 Prostate margin positive prediction method fusing 3D feature calculation

Country Status (1)

Country Link
CN (1) CN115830005A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118098520A (en) * 2024-04-23 2024-05-28 四川省肿瘤医院 Method for constructing positive predictive model of postoperative surgical incisal margin of esophageal squamous cell carcinoma patient
CN118098520B (en) * 2024-04-23 2024-06-21 四川省肿瘤医院 Method for constructing positive predictive model of postoperative surgical incisal margin of esophageal squamous cell carcinoma patient


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination