CN113469229A - Method and device for automatically labeling breast cancer focus based on deep learning - Google Patents


Info

Publication number
CN113469229A
CN113469229A (application number CN202110682622.7A)
Authority
CN
China
Prior art keywords: image, deep learning, training, breast cancer, processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110682622.7A
Other languages
Chinese (zh)
Inventor
赵慧英
林斯颖
杨跃东
吴卓
刘海晴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen Memorial Hospital Sun Yat Sen University
Original Assignee
Sun Yat Sen Memorial Hospital Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen Memorial Hospital Sun Yat Sen University filed Critical Sun Yat Sen Memorial Hospital Sun Yat Sen University
Priority to CN202110682622.7A priority Critical patent/CN113469229A/en
Publication of CN113469229A publication Critical patent/CN113469229A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Abstract

The invention provides a method and a device for automatically labeling breast cancer lesions based on deep learning, wherein the method comprises the following steps: acquiring an MR image of a breast cancer patient; delineating the enhanced image extracted from the MR image to obtain a lesion region, and extracting training image data to be processed and test image data to be processed according to the lesion region; training a preset deep learning model on the training image data to be processed; inputting the test data to be processed into the trained deep learning model for prediction to obtain an automatic labeling result for the breast cancer lesion; and verifying and optimizing the deep learning model by cross-validation. By processing the three-dimensional breast MR image with a combination of two-dimensional convolution and a CLSTM (convolutional long short-term memory network), the invention improves the accuracy of breast cancer lesion labeling; at the same time, cross-validation of the deep learning model improves the reliability of the predictions, which in turn can provide a theoretical basis for individualized, precise clinical diagnosis and treatment of breast cancer patients undergoing neoadjuvant therapy.

Description

Method and device for automatically labeling breast cancer focus based on deep learning
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a device for automatically labeling breast cancer lesions based on deep learning.
Background
At present, deep learning is applied in medical imaging mainly to: medical image reconstruction, medical image synthesis, lesion detection, prediction of neoadjuvant therapy response, discrimination of benign and malignant tumors, and evaluation of recurrence risk.
When deep learning is used for automatic labeling of breast cancer lesions, detection of tumor volume, shape and position can assist doctors in making accurate breast cancer assessments and treatment plans. Automatic labeling of breast tumors is considered a challenging task. First, breast tumors vary in size, shape, location and number across patients, which complicates automatic segmentation. Second, some lesions do not have sharp boundaries, which limits the performance of purely edge-based segmentation methods. Third, many scans have anisotropic voxel dimensions and vary widely along the z-axis (slice spacing ranges from 0.45 mm to 6.0 mm), which poses a further challenge for automatic labeling methods.
In recent years, researchers have also used fully convolutional networks (FCN) and U-Net to realize automatic labeling of breast tumors and obtained promising prediction results. However, deep learning methods based on two-dimensional images ignore the spatial information of the three-dimensional medical image along the z-axis, so the segmentation precision is limited and the accuracy of the prediction results is low. In addition, deep learning methods based on three-dimensional images have a higher computational cost and consume a large amount of GPU memory; this high memory consumption limits the depth of the network and the field of view of the filters, which restricts the performance and application of three-dimensional deep learning methods.
Disclosure of Invention
The invention provides a method and a device for automatically labeling breast cancer lesions based on deep learning, aiming to solve the technical problems that the prediction results of existing automatic breast cancer lesion labeling methods are inaccurate and that their GPU memory consumption is high.
In order to solve the above technical problems, the invention provides a deep learning-based automatic breast cancer lesion labeling method, which comprises the following steps:
acquiring an MR image of a breast cancer patient;
extracting an enhanced image from the MR image, delineating the tumor edge in the enhanced image to obtain a lesion region, and extracting training image data to be processed and test image data to be processed according to the lesion region;
performing image preprocessing on the training image data to be processed, and inputting the preprocessed data into a preset deep learning model for model training to obtain model parameters;
performing image preprocessing on the test data to be processed, and inputting the preprocessed data into the trained deep learning model for prediction to obtain an automatic labeling result for the breast cancer lesion;
and verifying the deep learning model by cross-validation based on the automatic labeling result, and optimizing the deep learning model according to the verification result.
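The steps above can be sketched as a minimal pipeline skeleton. All function names, array shapes and the train/test split ratio below are illustrative assumptions for the sketch, not taken from the patent:

```python
import numpy as np

def acquire_mr_image():
    # Stand-in for step 1: load a patient's MR volume as (slices, height, width).
    return np.random.rand(8, 64, 64)

def delineate_lesion(volume):
    # Stand-in for the manual delineation in step 2: a binary lesion mask.
    mask = np.zeros(volume.shape, dtype=np.uint8)
    mask[2:5, 20:40, 20:40] = 1
    return mask

def split_train_test(volume, mask, train_frac=0.75):
    # Divide the labelled slices into training and test data along the z-axis.
    k = int(volume.shape[0] * train_frac)
    return (volume[:k], mask[:k]), (volume[k:], mask[k:])

volume = acquire_mr_image()
mask = delineate_lesion(volume)
(train_x, train_y), (test_x, test_y) = split_train_test(volume, mask)
print(train_x.shape, test_x.shape)  # (6, 64, 64) (2, 64, 64)
```

Model training, prediction and cross-validation would then consume these arrays.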
Further, the extraction of the enhanced image from the MR image and the delineation of the tumor edge in the enhanced image to obtain a lesion region is specifically:
Exporting the breast cancer MR image in DICOM format, de-identifying (desensitizing) it, then extracting the enhanced image from the MR image and delineating the tumor edge in the enhanced image to obtain the lesion region.
Further, the extraction of the training image data to be processed and the test image data to be processed according to the lesion region specifically comprises:
Labeling the lesion region on the DCE images of the MR examination and copying the label, then extracting the lesion region from the T1-weighted, T2-weighted and diffusion-weighted images respectively to obtain the training image data to be processed and the test image data to be processed.
Further, performing image preprocessing on the training image data to be processed and inputting the preprocessed data into the preset deep learning model for model training to obtain model parameters is specifically:
Performing image preprocessing and data augmentation on the training image data to be processed to obtain the training image data to be input, and inputting the training image data to be input into the deep learning model for model training to obtain the model parameters.
Further, performing image preprocessing on the test data to be processed and inputting the preprocessed data into the trained deep learning model for prediction to obtain the automatic labeling result of the breast cancer lesion is specifically:
Loading the model parameters into the deep learning model to obtain the trained deep learning model, performing image preprocessing on the test image data to be processed to obtain the test image data to be input, and inputting the test image data to be input into the trained deep learning model to obtain the automatic labeling result for the breast cancer lesion.
In order to solve the same technical problem, the invention also provides a deep learning-based automatic breast cancer lesion labeling device, which comprises:
the acquisition module, used for acquiring an MR image of a breast cancer patient;
the segmentation module, used for extracting an enhanced image from the MR image, delineating the tumor edge in the enhanced image to obtain a lesion region, and extracting training image data to be processed and test image data to be processed according to the lesion region;
the training module, used for performing image preprocessing on the training image data to be processed and inputting the preprocessed data into a preset deep learning model for model training to obtain model parameters;
the prediction module, used for performing image preprocessing on the test data to be processed and inputting the preprocessed data into the trained deep learning model for prediction to obtain an automatic labeling result for the breast cancer lesion;
and the verification module, used for verifying the deep learning model by cross-validation based on the automatic labeling result and optimizing the deep learning model according to the verification result.
Further, the extraction of the enhanced image from the MR image and the delineation of the tumor edge in the enhanced image to obtain a lesion region is specifically:
Exporting the breast cancer MR image in DICOM format, de-identifying (desensitizing) it, then extracting the enhanced image from the MR image and delineating the tumor edge in the enhanced image to obtain the lesion region.
Further, the extraction of the training image data to be processed and the test image data to be processed according to the lesion region specifically comprises:
Labeling the lesion region on the DCE images of the MR examination and copying the label, then extracting the lesion region from the T1-weighted, T2-weighted and diffusion-weighted images respectively to obtain the training image data to be processed and the test image data to be processed.
Further, the training module is specifically configured to:
and performing image preprocessing and data enhancement processing on the training image data to be processed to obtain training image data to be input, and inputting the training image data to be input into the deep learning model for model training to obtain model parameters.
Further, the prediction module is specifically configured to:
and inputting the model parameters into the deep learning model to obtain a deep learning model after training, performing image preprocessing on the test image data to be processed to obtain test image data to be input, and inputting the test image data to be input into the deep learning model after training to obtain an automatic labeling result of the breast cancer focus.
Compared with the prior art, the invention has the following beneficial effects:
the embodiment of the invention provides a method and a device for automatically labeling a breast cancer focus based on deep learning, wherein the method comprises the following steps: acquiring an MR image of a breast cancer patient; extracting an enhanced image in the MR image, delineating the edge of the tumor in the enhanced image to obtain a focus area, and extracting training image data to be processed and test image data to be processed according to the focus area; preprocessing the image of the training image data to be processed, and inputting the preprocessed training image data into a preset deep learning model for model training to obtain model parameters; preprocessing the image of the test data to be processed, inputting the preprocessed test data into a trained deep learning model for prediction, and obtaining an automatic labeling result of the breast cancer focus; and verifying the deep learning model by adopting a cross verification method based on the automatic labeling result, and optimizing the deep learning model according to the verification result.
By processing the three-dimensional breast MR image with a combination of two-dimensional convolution and a CLSTM (convolutional long short-term memory network), the invention improves the accuracy of breast cancer lesion labeling; at the same time, cross-validation of the deep learning model improves the reliability of the predictions, thereby providing a theoretical basis for individualized, precise clinical diagnosis and treatment of breast cancer patients undergoing neoadjuvant therapy.
Drawings
Fig. 1 is a schematic flowchart of a method for automatically labeling a breast cancer lesion based on deep learning according to an embodiment of the present invention;
fig. 2 is another schematic flowchart of a method for automatically labeling a breast cancer lesion based on deep learning according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus for automatically labeling a breast cancer lesion based on deep learning according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, an embodiment of the present invention provides a method for automatically labeling a breast cancer lesion based on deep learning, including the steps of:
s1, acquiring an MR image of the breast cancer patient.
In an embodiment of the present invention, the MR images of breast cancer patients are optionally screened by inclusion and exclusion criteria, which comprise case criteria and MR criteria. Case inclusion criteria: 1) all patients undergoing neoadjuvant therapy in the hospital; 2) baseline biopsy pathology results and specimens are available and immunohistochemical testing has been performed; 3) post-neoadjuvant-therapy surgical pathology results and specimens are available and immunohistochemical testing has been performed. Case exclusion criteria: 1) non-primary breast malignancy; 2) patients with recurrent breast cancer; 3) patients with multicentric breast cancer of different molecular subtypes. MR inclusion criteria: 1) patients who meet case inclusion criterion (1); 2) the original in-hospital MRI images are available. MR exclusion criteria: 1) the original MRI images do not meet the quality-control standard; 2) the MRI images lack DCE data; 3) a breast tissue marker has been placed within the tumor in the MRI images.
As a specific embodiment, the acquired breast MR sequences include: the axial T1WI-DCE sequence, showing the blood-supply information of the lesion; the axial T1WI plain-scan, fat-suppressed T2WI and delayed-phase enhanced T1WI sequences, showing the anatomical and spatial information of the lesion; and the axial DWI sequence and ADC map, showing the functional information of the lesion.
S2, extracting the enhanced image from the MR image, delineating the tumor edge in the enhanced image to obtain a lesion region, and extracting training image data to be processed and test image data to be processed according to the lesion region.
Further, the extraction of the enhanced image from the MR image and the delineation of the tumor edge in the enhanced image to obtain a lesion region is specifically:
Exporting the breast cancer MR image in DICOM format, de-identifying (desensitizing) it, then extracting the enhanced image from the MR image and delineating the tumor edge in the enhanced image to obtain the lesion region.
In the embodiment of the invention, the breast cancer MR image in DICOM format is exported from the PACS system and de-identified; the 120th axial T1W enhanced image of the DCE sequence in the breast cancer MR image is extracted and imported into 3D Slicer, and the manually labeled lesion region is obtained by segmenting the axial T1W enhanced image in 3D Slicer. Specifically, the manual labeling segments the lesion region by delineating the tumor edge in the axial T1W enhanced image in 3D Slicer.
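The export-and-desensitize step can be illustrated as follows. In practice this would operate on DICOM datasets (for example with a DICOM library such as pydicom); here a plain dict stands in for the tag set so the sketch stays self-contained, and the tag list is an assumed, non-exhaustive example:

```python
# Tags treated as identifying information (an illustrative, non-exhaustive set).
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "InstitutionName"}

def desensitize(tags):
    # Blank identifying tags, keep imaging tags untouched.
    return {k: ("" if k in PHI_TAGS else v) for k, v in tags.items()}

record = {"PatientName": "DOE^JANE", "PatientID": "12345",
          "Modality": "MR", "SeriesDescription": "DCE T1W enhanced"}
clean = desensitize(record)
print(clean)
```

The cleaned dataset keeps all imaging metadata needed downstream while removing patient identity.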
Further, the extraction of the training image data to be processed and the test image data to be processed according to the lesion region specifically comprises:
Labeling the lesion region on the DCE images of the MR examination and copying the label, then extracting the lesion region from the T1-weighted, T2-weighted and diffusion-weighted images respectively to obtain the training image data to be processed and the test image data to be processed.
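Copying the label drawn on the DCE series to the other sequences can be sketched as below, under the assumption (made only for this sketch) that the series are already co-registered so the same voxel mask applies to each:

```python
import numpy as np

def copy_label(mask, sequences):
    # Pair the single DCE-drawn mask with each co-registered sequence.
    return {name: (volume, mask) for name, volume in sequences.items()}

mask = np.zeros((4, 8, 8), dtype=np.uint8)
mask[1:3, 2:6, 2:6] = 1  # lesion delineated once, on the DCE images

sequences = {"T1W": np.random.rand(4, 8, 8),
             "T2W": np.random.rand(4, 8, 8),
             "DWI": np.random.rand(4, 8, 8)}
labelled = copy_label(mask, sequences)
print(sorted(labelled))  # ['DWI', 'T1W', 'T2W']
```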
S3, performing image preprocessing on the training image data to be processed, and inputting the preprocessed data into a preset deep learning model for model training to obtain model parameters.
further, step S3 is specifically:
and performing image preprocessing and data enhancement processing on the training image data to be processed to obtain training image data to be input, and inputting the training image data to be input into the deep learning model for model training to obtain model parameters.
And S4, performing image preprocessing on the test data to be processed, and inputting the preprocessed data into the trained deep learning model for prediction to obtain an automatic labeling result for the breast cancer lesion.
Further, step S4 is specifically:
and inputting the model parameters into the deep learning model to obtain a deep learning model after training, performing image preprocessing on the test image data to be processed to obtain test image data to be input, and inputting the test image data to be input into the deep learning model after training to obtain an automatic labeling result of the breast cancer focus.
And S5, verifying the deep learning model by cross-validation based on the automatic labeling result, and optimizing the deep learning model according to the verification result.
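The cross-validation in step S5 can be sketched as a plain k-fold split over patients (k = 5 is an assumed, typical choice; the patent does not fix the fold count):

```python
def k_fold(items, k=5):
    # Partition the items into k folds; each fold serves once as the
    # validation set while the remaining folds form the training set.
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield training, validation

patients = list(range(10))
splits = list(k_fold(patients, k=5))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 5 8 2
```

Averaging the labeling metric over the five validation folds gives a less optimistic estimate of performance than a single train/test split.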
It should be noted that the embodiment of the present invention constructs a deep learning model based on a convolutional neural network and a CLSTM (convolutional long short-term memory network). The model has more network layers, which not only improves the representational capacity of the neural network but also allows it to fit more complex functions, effectively improving prediction accuracy. The model adds a convolution module and a CLSTM to extract three-dimensional image features, which helps improve the accuracy of three-dimensional MR image labeling. On the other hand, replacing 3D convolution with the lightweight CLSTM network avoids the burden of a large number of parameters and converges quickly, while effectively reducing the parameter count and computational cost, and therefore the training overhead.
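The role of the CLSTM, carrying a hidden state across the z-axis so that 2D convolutions can exploit 3D context, can be illustrated with a deliberately minimal single-channel ConvLSTM cell in NumPy. This is a naive sketch for clarity only; a real model would use a deep-learning framework, multiple channels and learned weights:

```python
import numpy as np

def conv2d_same(x, w):
    # Naive 3x3 "same" convolution, single channel in and out, for illustration.
    h, wd = x.shape
    pad = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """One ConvLSTM step: the gates are convolutions over input and hidden state."""
    def __init__(self, rng):
        # One 3x3 kernel per gate (input, forget, output, candidate),
        # for the input path and the hidden path respectively.
        self.wx = rng.standard_normal((4, 3, 3)) * 0.1
        self.wh = rng.standard_normal((4, 3, 3)) * 0.1

    def step(self, x, h, c):
        z = [conv2d_same(x, self.wx[k]) + conv2d_same(h, self.wh[k])
             for k in range(4)]
        i, f, o = sigmoid(z[0]), sigmoid(z[1]), sigmoid(z[2])
        g = np.tanh(z[3])
        c_new = f * c + i * g        # cell state carries z-axis context
        h_new = o * np.tanh(c_new)
        return h_new, c_new

rng = np.random.default_rng(0)
cell = ConvLSTMCell(rng)
h = c = np.zeros((8, 8))
# Feed 5 consecutive slices of a volume along the z-axis through the cell.
for x in np.random.default_rng(1).standard_normal((5, 8, 8)):
    h, c = cell.step(x, h, c)
print(h.shape)  # (8, 8)
```

Because each step only applies 2D convolutions, memory grows with slice size rather than with the full 3D kernel volume, which is the efficiency argument made above.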
As a specific implementation, the embodiment of the present invention builds the convolutional-neural-network-based deep learning model on an encoder-decoder architecture, where the encoder extracts image features and the decoder restores the extracted features to the size of the original image and outputs the final segmentation result. Optionally, considering that three-dimensional data are used to predict breast cancer molecular subtype, the embodiment of the invention constructs a three-dimensional convolutional neural network model and uses three-dimensional spatial features to further improve the segmentation. Specifically, a coarse segmentation result is first obtained with a simple 2D DenseUNet, two-dimensional image features are extracted with a two-dimensional convolutional neural network, three-dimensional image features are extracted with an improved CLSTM (convolutional long short-term memory network), and finally a hybrid feature-fusion layer is designed to jointly optimize the two-dimensional and three-dimensional features. The applicability of the neural network model to prospective samples is evaluated by the Dice score or by decision curves, calibration curves and nomograms, and the model is adjusted according to the segmentation results, further improving the accuracy of the model's segmentation.
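The Dice score used above to evaluate segmentation quality is straightforward to sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    # Dice similarity coefficient between two binary masks:
    # 2 * |A intersect B| / (|A| + |B|), in [0, 1].
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1  # 16 pixels
b = np.zeros((8, 8), dtype=np.uint8); b[4:8, 2:6] = 1  # 16 pixels, 8 shared
print(round(dice(a, b), 3))  # 0.5
```

A score of 1.0 means the predicted mask exactly matches the manual delineation; 0.0 means no overlap.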
In the embodiment of the invention, the breast cancer lesion labeling result is obtained by deep-learning detection. Processing the three-dimensional breast MR image with a combination of two-dimensional convolution and a CLSTM (convolutional long short-term memory network) improves the accuracy of breast cancer lesion labeling; at the same time, verifying the deep learning model with cross-validation improves the reliability of the predictions, which can further provide a theoretical basis for individualized, precise clinical diagnosis and treatment of breast cancer patients undergoing neoadjuvant therapy.
It should be noted that, for simplicity, the above method or flow embodiment is described as a series of combined acts, but those skilled in the art should understand that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are exemplary, and the acts involved are not necessarily all required by the invention.
Referring to fig. 3, in order to solve the same technical problem, the present invention further provides a deep learning-based automatic breast cancer lesion labeling device, which comprises:
the acquisition module 1, used for acquiring an MR image of a breast cancer patient;
the segmentation module 2, used for extracting an enhanced image from the MR image, delineating the tumor edge in the enhanced image to obtain a lesion region, and extracting training image data to be processed and test image data to be processed according to the lesion region;
the training module 3, used for performing image preprocessing on the training image data to be processed and inputting the preprocessed data into a preset deep learning model for model training to obtain model parameters;
the prediction module 4, used for performing image preprocessing on the test data to be processed and inputting the preprocessed data into the trained deep learning model for prediction to obtain an automatic labeling result for the breast cancer lesion;
and the verification module 5, used for verifying the deep learning model by cross-validation based on the automatic labeling result and optimizing the deep learning model according to the verification result.
Further, the extraction of the enhanced image from the MR image and the delineation of the tumor edge in the enhanced image to obtain a lesion region is specifically:
Exporting the breast cancer MR image in DICOM format, de-identifying (desensitizing) it, then extracting the enhanced image from the MR image and delineating the tumor edge in the enhanced image to obtain the lesion region.
Further, the extraction of the training image data to be processed and the test image data to be processed according to the lesion region specifically comprises:
Labeling the lesion region on the DCE images of the MR examination and copying the label, then extracting the lesion region from the T1-weighted, T2-weighted and diffusion-weighted images respectively to obtain the training image data to be processed and the test image data to be processed.
Further, the training module 3 is specifically configured to:
and performing image preprocessing and data enhancement processing on the training image data to be processed to obtain training image data to be input, and inputting the training image data to be input into the deep learning model for model training to obtain model parameters.
Further, the prediction module 4 is specifically configured to:
and inputting the model parameters into the deep learning model to obtain a deep learning model after training, performing image preprocessing on the test image data to be processed to obtain test image data to be input, and inputting the test image data to be input into the deep learning model after training to obtain an automatic labeling result of the breast cancer focus.
It is to be understood that the above apparatus embodiment corresponds to the method embodiment of the present invention, and the deep learning-based automatic breast cancer lesion labeling device provided by the embodiment of the present invention can implement the deep learning-based automatic breast cancer lesion labeling method provided by any method embodiment of the present invention.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A deep learning-based automatic breast cancer lesion labeling method, characterized by comprising the following steps:
acquiring an MR image of a breast cancer patient;
extracting an enhanced image from the MR image, delineating the tumor edge in the enhanced image to obtain a lesion region, and extracting training image data to be processed and test image data to be processed according to the lesion region;
performing image preprocessing on the training image data to be processed, and inputting the preprocessed data into a preset deep learning model for model training to obtain model parameters;
performing image preprocessing on the test data to be processed, and inputting the preprocessed data into the trained deep learning model for prediction to obtain an automatic labeling result for the breast cancer lesion;
and verifying the deep learning model by cross-validation based on the automatic labeling result, and optimizing the deep learning model according to the verification result.
2. The deep learning-based automatic breast cancer lesion labeling method of claim 1, wherein the extraction of the enhanced image from the MR image and the delineation of the tumor edge in the enhanced image to obtain the lesion region specifically comprises:
Exporting the breast cancer MR image in DICOM format, de-identifying (desensitizing) it, then extracting the enhanced image from the MR image and delineating the tumor edge in the enhanced image to obtain the lesion region.
3. The deep learning-based automatic breast cancer lesion labeling method of claim 1, wherein the extraction of the training image data to be processed and the test image data to be processed according to the lesion region specifically comprises:
Labeling the lesion region on the DCE images of the MR examination and copying the label, then extracting the lesion region from the T1-weighted, T2-weighted and diffusion-weighted images respectively to obtain the training image data to be processed and the test image data to be processed.
4. The deep learning-based automatic breast cancer lesion labeling method of claim 1, wherein performing image preprocessing on the training image data to be processed and inputting the preprocessed data into a preset deep learning model for model training to obtain model parameters specifically comprises:
Performing image preprocessing and data augmentation on the training image data to be processed to obtain the training image data to be input, and inputting the training image data to be input into the deep learning model for model training to obtain the model parameters.
5. The deep learning-based automatic breast cancer lesion labeling method of claim 1, wherein performing image preprocessing on the test data to be processed and inputting the preprocessed data into the trained deep learning model for prediction to obtain the automatic labeling result of the breast cancer lesion specifically comprises:
Loading the model parameters into the deep learning model to obtain the trained deep learning model, performing image preprocessing on the test image data to be processed to obtain the test image data to be input, and inputting the test image data to be input into the trained deep learning model to obtain the automatic labeling result for the breast cancer lesion.
6. A deep learning-based automatic breast cancer lesion labeling device, characterized by comprising:
the acquisition module is used for acquiring an MR image of a breast cancer patient;
the segmentation module is used for extracting an enhanced image in the MR image, sketching the edge of a tumor in the enhanced image to obtain a focus area, and extracting training image data to be processed and test image data to be processed according to the focus area;
the training module is used for inputting the training image data to be processed into a preset deep learning model after image preprocessing, and performing model training to obtain model parameters;
the prediction module is used for inputting the test data to be processed into the deep learning model after training for prediction after image preprocessing is carried out on the test data to be processed, and an automatic labeling result of the breast cancer focus is obtained;
and the verification module is used for verifying the deep learning model by adopting a cross verification method based on the automatic labeling result and optimizing the deep learning model according to the verification result.
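The cross-validation used by the verification module can be sketched as a k-fold split of sample indices. The interleaved fold assignment below is an illustrative choice, not the scheme the claim specifies.

```python
# k-fold cross-validation: partition sample indices into k folds;
# each fold serves once as the validation set while the rest train.
def kfold_splits(indices, k):
    folds = [indices[i::k] for i in range(k)]  # interleaved folds
    splits = []
    for i, val in enumerate(folds):
        train = [x for j, fold in enumerate(folds) if j != i
                 for x in fold]
        splits.append((train, val))
    return splits

splits = kfold_splits(list(range(10)), 5)
```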
7. The device for automatically labeling a breast cancer lesion based on deep learning according to claim 6, wherein extracting the enhanced image from the MR image and delineating the tumor edge in the enhanced image to obtain the lesion region specifically comprises:
exporting the breast cancer MR image in DICOM format, performing desensitization processing on the breast cancer MR image, then extracting the enhanced image from the MR image, and delineating the tumor edge in the enhanced image to obtain the lesion region.
8. The device for automatically labeling a breast cancer lesion based on deep learning according to claim 6, wherein extracting the training image data to be processed and the test image data to be processed according to the lesion region specifically comprises:
labeling the lesion region on the DCE-MRI image and copying the label, then extracting the lesion region from the T1-weighted, T2-weighted and diffusion-weighted images respectively, to obtain the training image data to be processed and the test image data to be processed.
9. The device for automatically labeling a breast cancer lesion based on deep learning according to claim 6, wherein the training module is specifically configured to:
perform image preprocessing and data augmentation on the training image data to be processed to obtain training image data to be input, and input the training image data to be input into the deep learning model for model training to obtain the model parameters.
10. The device for automatically labeling a breast cancer lesion based on deep learning according to claim 6, wherein the prediction module is specifically configured to:
load the model parameters into the deep learning model to obtain the trained deep learning model, perform image preprocessing on the test image data to be processed to obtain test image data to be input, and input the test image data to be input into the trained deep learning model to obtain the automatic labeling result of the breast cancer lesion.
CN202110682622.7A 2021-06-18 2021-06-18 Method and device for automatically labeling breast cancer focus based on deep learning Pending CN113469229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110682622.7A CN113469229A (en) 2021-06-18 2021-06-18 Method and device for automatically labeling breast cancer focus based on deep learning


Publications (1)

Publication Number Publication Date
CN113469229A true CN113469229A (en) 2021-10-01

Family

ID=77868883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110682622.7A Pending CN113469229A (en) 2021-06-18 2021-06-18 Method and device for automatically labeling breast cancer focus based on deep learning

Country Status (1)

Country Link
CN (1) CN113469229A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588504A (en) * 2022-10-28 2023-01-10 大连大学附属中山医院 Monitoring management system based on molecular image imaging technology
CN116913479A (en) * 2023-09-13 2023-10-20 西南石油大学 Method and device for determining triple negative breast cancer patient implementing PMRT

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097559A (en) * 2019-04-29 2019-08-06 南京星程智能科技有限公司 Eye fundus image focal area mask method based on deep learning
CN110648304A (en) * 2018-06-11 2020-01-03 上海梵焜医疗器械有限公司 Intelligent auxiliary diagnosis method for handheld hard endoscope
CN111402260A (en) * 2020-02-17 2020-07-10 北京深睿博联科技有限责任公司 Medical image segmentation method, system, terminal and storage medium based on deep learning
CN111739033A (en) * 2020-06-22 2020-10-02 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Method for establishing breast molybdenum target and MR image omics model based on machine learning
CN111915594A (en) * 2020-08-06 2020-11-10 南通大学 End-to-end neural network-based breast cancer focus segmentation method
US20210110532A1 (en) * 2019-10-11 2021-04-15 International Business Machines Corporation Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SONG Chao et al.: "Predictive value of quantitative DCE-MRI for differentiating benign from malignant non-mass enhancement lesions of the breast", Radiologic Practice, vol. 35, no. 2, pp. 190-196 *


Similar Documents

Publication Publication Date Title
US11227683B2 (en) Methods and systems for characterizing anatomical features in medical images
US11937962B2 (en) Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
Ligtenberg et al. Modality-specific target definition for laryngeal and hypopharyngeal cancer on FDG-PET, CT and MRI
US11219426B2 (en) Method and system for determining irradiation dose
CN113034436B (en) Breast cancer molecular transformation prediction device based on breast MR image histology
WO2020164468A1 (en) Medical image segmentation method, image segmentation method, related device and system
US9603567B2 (en) System and method for evaluation of disease burden
CN109255354B (en) Medical CT-oriented computer image processing method and device
CN113469229A (en) Method and device for automatically labeling breast cancer focus based on deep learning
Park et al. Validation of automatic target volume definition as demonstrated for 11C-choline PET/CT of human prostate cancer using multi-modality fusion techniques
CN105654490A (en) Lesion region extraction method and device based on ultrasonic elastic image
CN113139981A (en) DCE-MRI (direct current imaging-magnetic resonance imaging) breast tumor image segmentation method based on deep neural network
Almeida et al. Quantification of tumor burden in multiple myeloma by atlas-based semi-automatic segmentation of WB-DWI
CN114881914A (en) System and method for determining three-dimensional functional liver segment based on medical image
CN116309551B (en) Method, device, electronic equipment and readable medium for determining focus sampling area
Wang et al. Development and validation of an MRI radiomics-based signature to predict histological grade in patients with invasive breast cancer
US20140094679A1 (en) Systems and methods for performing organ detection
CN110738649A (en) training method of Faster RCNN network for automatic identification of stomach cancer enhanced CT images
Breznik et al. Multiple comparison correction methods for whole-body magnetic resonance imaging
CN113838020A (en) Molybdenum target image-based lesion area quantification method
KR20210136571A (en) Clinical decision support system to diagnose breast cancer genetic type by analyzing biopsy breast cancer tissue image
Rodriguez Ruiz Artificial intelligence and tomosynthesis for breast cancer detection
CN117427286B (en) Tumor radiotherapy target area identification method, system and equipment based on energy spectrum CT
US20220405917A1 (en) Combination of features from biopsies and scans to predict prognosis in sclc
CN115880295B (en) Computer-aided tumor ablation navigation system with accurate positioning function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination