CN116580037B - Nasopharyngeal carcinoma image segmentation method and system based on deep learning - Google Patents


Info

Publication number
CN116580037B
CN116580037B (application CN202310835099.6A)
Authority
CN
China
Prior art keywords
deep learning
nasopharyngeal carcinoma
image
sample set
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310835099.6A
Other languages
Chinese (zh)
Other versions
CN116580037A (en)
Inventor
高晓葳
王珊珊
杜俊尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SECOND HOSPITAL OF TIANJIN MEDICAL UNIVERSITY
Original Assignee
SECOND HOSPITAL OF TIANJIN MEDICAL UNIVERSITY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SECOND HOSPITAL OF TIANJIN MEDICAL UNIVERSITY
Priority to CN202310835099.6A
Publication of CN116580037A
Application granted
Publication of CN116580037B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/11 Region-based segmentation
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06T7/0012 Biomedical image inspection
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30096 Tumor; Lesion
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application relates to the technical field of image segmentation, and in particular to a nasopharyngeal carcinoma image segmentation method and system based on deep learning. In view of the anatomical complexity of the region around a nasopharyngeal carcinoma lesion, the application divides lesions into two classes, bone tissue and muscle/adipose tissue, and identifies each class with its own deep learning model, which improves the accuracy of model identification and therefore the accuracy of model segmentation. In addition, to address the low identification accuracy when the lesion occurs in muscle or adipose tissue, pixels with HU values above a certain threshold are randomly deleted, which weakens the influence of bone during training, makes the model pay more attention to muscle and fat information, improves its ability to identify lesions in muscle and fat regions, and thus improves both the identification accuracy and the segmentation accuracy of the model.

Description

Nasopharyngeal carcinoma image segmentation method and system based on deep learning
Technical Field
The application relates to the technical field of image segmentation, in particular to a nasopharyngeal carcinoma image segmentation method and system based on deep learning.
Background
Nasopharyngeal carcinoma (NPC) is one of the most common malignant head and neck tumors in China, and its early detection is difficult because its early clinical manifestations are non-specific. Imaging and endoscopy are the usual examinations for NPC; however, both are influenced by subjective factors such as clinician experience and objective factors such as image resolution, so diagnostic results in conventional NPC diagnosis and treatment are often inconsistent. In addition, the large amount of repetitive work in the conventional workflow lowers clinical efficiency. A more stable, objective, and efficient means of diagnosing and treating NPC is therefore of great importance to clinical work. In recent years, with the growing application of artificial intelligence, and deep learning in particular, to computer vision, models based on imaging and endoscopic images have made great progress in the diagnosis and treatment of cancers such as lung cancer, skin cancer, and colon cancer. Deep learning models can autonomously select the most suitable features from raw data for training, and have shown exciting performance in image recognition, segmentation, risk prediction, treatment-response prediction, and related tasks.
CT is an abbreviation for computed tomography; in clinical medicine the most common form uses X-rays as the radiation source to produce tomographic images, i.e., X-ray CT. Briefly, the imaging principle of CT is as follows: different human tissues absorb X-rays to different degrees, so X-rays passing through them are attenuated by different amounts, and images can be reconstructed from this attenuation information. The main advantages of CT imaging are high density resolution, clear anatomical relationships within each section, and good display of lesion detail, in particular fine calcification, liquefaction, necrosis, and other structures that plain radiographs cannot show, which is very helpful for qualitative diagnosis in clinical practice. Compared with MRI, CT images faster, and its image quality is higher than that of ultrasound, so CT has long been the main imaging method for nasopharyngeal carcinoma.
In the prior art there are schemes that segment CT images with deep learning models. For example, Chinese patent CN111798462A proposes a multi-scale integrated model combining a 2.5-dimensional convolutional neural network with an attention mechanism. When segmenting a target region of a CT image, it has stronger feature-learning capability over long image distances and focuses more on the target region during segmentation, yielding a better segmentation result; integrating models at several scales improves segmentation precision, and the ensemble also provides an uncertainty estimate of the segmentation result, better assisting physician decisions.
however, since the nasopharyngeal carcinoma focus is located in the head area, the area is complex, the skull, muscle, fat, blood and the like are all gathered in the area, the boundaries between the tissues are not obvious, and the segmentation accuracy is not high due to the fact that the above complex situation is easily affected when the nasopharyngeal carcinoma focus is segmented by adopting the scheme, so that a method and a system for segmenting the nasopharyngeal carcinoma CT image for improving the segmentation accuracy are urgently needed in the prior art.
Disclosure of Invention
Aiming at the defects of the above technical scheme, the application provides a nasopharyngeal carcinoma image segmentation method and system based on deep learning, which improve the accuracy of segmenting the nasopharyngeal carcinoma region in CT images.
In order to achieve the above object, according to one aspect of the present application, there is provided a nasopharyngeal carcinoma image segmentation method based on deep learning, comprising the steps of:
s101: collecting CT images of nasopharyngeal carcinoma patients of a hospital system as a sample set;
s102: preprocessing the sample set;
s103: training a first deep learning model according to the sample set;
s104: inputting the CT image to be identified into the first deep learning model to obtain a first preliminary identification result;
s105: judging whether the first preliminary identification result is in a bone tissue area or not; if not, the next step is entered, and if so, the process proceeds to S109;
s106: processing the sample set to obtain a new sample set, and training a second deep learning model according to the new sample set;
s107: inputting the CT image to be identified into the second deep learning model to obtain a second preliminary identification result;
s108: judging whether the second preliminary identification result is in a muscle and adipose tissue region; if yes, entering the next step, and if not, sending the CT image to be identified to a doctor for comprehensive judgment;
s109: performing image segmentation according to the first preliminary identification result or the second preliminary identification result.
Preferably, in this embodiment, CT images from electronic medical records of patients diagnosed with nasopharyngeal carcinoma in Tianjin from 2000 to 2020 are used as the sample set for nasopharyngeal carcinoma image segmentation, comprising 58 T1 cases, 35 T2 cases, 135 T3 cases, and 56 T4 cases; the CT images were scanned on the hospital's Philips large-bore CT scanner with the patient supine and fixed with a vacuum cushion, the tube voltage set to 120 kV and the X-ray tube current to 25 mA, and were acquired before the patient's first treatment;
it should be emphasized that each CT image should include a nasopharyngeal carcinoma lesion area manually delineated by a professional radiotherapy physician;
still further, nasopharyngeal carcinoma CT images from the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2019 challenge are also added to the sample set;
preferably, the preprocessing comprises converting pixel values of the CT image into HU values;
the preprocessing further comprises removing isolated values in the CT image;
preferably, removing isolated values in the CT image specifically comprises: computing the distribution of HU values over the sample set; when the HU value of a pixel in the CT image is smaller than the 1st-percentile HU value, the 1st-percentile value is assigned to that pixel, and when the HU value of a pixel is larger than the 99th-percentile HU value, the 99th-percentile value is assigned;
preferably, the step S103 specifically includes:
s1031: establishing an initial first deep learning model;
preferably, the initial first deep learning model is a convolutional neural network model; the convolutional neural network model comprises an input layer, a convolutional layer, an activation layer, a pooling layer, and a fully connected layer;
s1032: dividing the sample set into a training set and a verification set according to the proportion of 8:2;
preferably, the convolutional neural network model is trained with the training set, and the verification set is used to check whether the model's error function converges;
s1033: obtaining the first deep learning model;
preferably, the first preliminary identification result is a nasopharyngeal carcinoma lesion area;
preferably, the S106 includes:
s1061: randomly deleting 25% of the pixels whose HU values are larger than a preset threshold value from the sample set, to obtain a new sample set;
it is worth emphasizing that the application randomly deletes bone-tissue pixels, thereby highlighting muscle and fat information and improving the model's accuracy in identifying nasopharyngeal carcinoma lesions in muscle and fat regions;
s1062: establishing an initial second deep learning model;
preferably, the second deep learning model is the same as the first deep learning model: a convolutional neural network model comprising an input layer, a convolutional layer, an activation layer, a pooling layer, and a fully connected layer;
s1063: training the convolutional neural network model by adopting the new sample set;
preferably, the convolutional neural network model is trained with the new training set, and the new verification set is used to check whether the model's error function converges;
s1064: obtaining the second deep learning model;
preferably, the second preliminary identification result is a nasopharyngeal carcinoma lesion area;
according to another aspect of the application, a nasopharyngeal carcinoma image segmentation system based on deep learning is further provided, and the segmentation system adopts the nasopharyngeal carcinoma image segmentation method based on deep learning.
According to another aspect of the present application, there is also provided a computer-readable storage medium having stored thereon a data processing program, the data processing program being executed by a processor to perform the above-described method for segmentation of nasopharyngeal carcinoma images based on deep learning.
Based on the technical scheme, the nasopharyngeal carcinoma image segmentation method and system based on deep learning provided by the application have the following technical effects:
according to the application, the nasopharyngeal carcinoma focus is divided into two types of bone tissues and muscle adipose tissues by the characteristic of being relatively complex to the area of the nasopharyngeal carcinoma focus, and the focus is respectively identified by adopting two deep learning models, so that the accuracy of model identification is improved, and the accuracy of model segmentation is further improved;
in addition, according to the problem that the identification accuracy is not high when the nasopharyngeal carcinoma focus occurs on the muscle adipose tissue, the HU value larger than a certain threshold value is randomly deleted, so that the influence of bones in the training process is weakened, the model is enabled to pay more attention to the messages such as muscle, fat and the like, the identification capability of the model on the focus of the muscle and fat region is improved, the identification accuracy of the model is improved, and the segmentation accuracy of the model is further improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a nasopharyngeal carcinoma image segmentation method based on deep learning according to an embodiment of the present application;
FIG. 2 is a flowchart of training a first deep learning model according to the sample set provided in an embodiment of the present application;
fig. 3 is a flowchart of training a second deep learning model according to the sample set according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The concept of the present application will be described with reference to the accompanying drawings. It should be noted that the following descriptions of the concepts are only for making the content of the present application easier to understand, and do not represent a limitation on the protection scope of the present application.
In order to achieve the above object, in an example of the present embodiment, as shown in fig. 1, a method for segmenting a nasopharyngeal carcinoma image based on deep learning is provided, including the following steps:
s101: collecting CT images of nasopharyngeal carcinoma patients of a hospital system as a sample set;
specifically, in this embodiment, CT images from electronic medical records of patients diagnosed with nasopharyngeal carcinoma in Tianjin from 2000 to 2020 are used as the sample set for nasopharyngeal carcinoma image segmentation, comprising 58 T1 cases, 35 T2 cases, 135 T3 cases, and 56 T4 cases; the CT images were scanned on the hospital's Philips large-bore CT scanner with the patient supine and fixed with a vacuum cushion, the tube voltage set to 120 kV and the X-ray tube current to 25 mA, and were acquired before the patient's first treatment;
it should be emphasized that each CT image should include a nasopharyngeal carcinoma lesion area manually delineated by a professional radiotherapy physician;
further, since this embodiment uses CT images as the sample set for training the subsequent models, nasopharyngeal carcinoma CT images from the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2019 challenge are also added to the sample set; on the one hand, this sample set is publicly available and its lesion areas are delineated accurately; on the other hand, adding CT images from different sources prevents medical histories from a single region, or data acquired on a single instrument, from limiting the generalizability of the results;
s102: preprocessing the sample set;
in particular, the preprocessing comprises converting pixel values of the CT image into HU values; specifically converting formulas to the prior art, not discussed in detail herein;
isolated values are then removed from the CT image, so that they do not interfere with the deep learning model during training;
specifically, removing isolated values in the CT image comprises: computing the distribution of HU values over the sample set; when a pixel's HU value in the CT image is smaller than the 1st-percentile HU value, the 1st-percentile value is assigned to that pixel, and when a pixel's HU value is larger than the 99th-percentile HU value, the 99th-percentile value is assigned;
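A minimal sketch of this percentile clipping, assuming the 1st and 99th percentiles are computed over all HU values pooled from the sample set:

```python
import numpy as np

def clip_to_percentiles(hu_images, lo_pct=1.0, hi_pct=99.0):
    """Remove isolated HU values by clipping every image to the 1st
    and 99th percentiles of the HU distribution over the sample set."""
    all_hu = np.concatenate([img.ravel() for img in hu_images])
    lo, hi = np.percentile(all_hu, [lo_pct, hi_pct])
    # Values below lo become lo; values above hi become hi.
    return [np.clip(img, lo, hi) for img in hu_images]
```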
s103: training a first deep learning model according to the sample set;
specifically, as shown in fig. 2, the step S103 specifically includes:
s1031: establishing an initial first deep learning model;
specifically, the initial first deep learning model is a convolutional neural network model; the convolutional neural network model comprises an input layer, a convolutional layer, an activation layer, a pooling layer, and a fully connected layer;
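As a hedged sketch of such a network (the PyTorch framework, channel counts, 128x128 patch size, and two-class output are illustrative assumptions, not specified by the application):

```python
import torch
import torch.nn as nn

class NPCNet(nn.Module):
    """Minimal CNN with the layer types named in the text: input,
    convolutional, activation, pooling, and fully connected layers."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),                                    # activation layer
            nn.MaxPool2d(2),                              # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected layer; assumes 128x128 single-channel HU patches,
        # which two 2x poolings reduce to 32 channels of 32x32.
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = NPCNet()
logits = model(torch.zeros(2, 1, 128, 128))  # batch of two 128x128 patches
```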
s1032: dividing the sample set into a training set and a verification set according to the proportion of 8:2;
specifically, the convolutional neural network model is trained with the training set, and the verification set is used to check whether the model's error function converges;
s1033: obtaining the first deep learning model;
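The 8:2 division of S1032 can be sketched as follows; the shuffling and the fixed seed are illustrative assumptions added for reproducibility:

```python
import random

def split_8_2(samples, seed=42):
    """Shuffle and split the sample set 8:2 into training and
    verification subsets, as in step S1032."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(samples) * 0.8)  # 80% for training
    train = [samples[i] for i in idx[:cut]]
    val = [samples[i] for i in idx[cut:]]
    return train, val

train_set, val_set = split_8_2(list(range(100)))
# len(train_set) == 80, len(val_set) == 20
```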
s104: inputting the CT image to be identified into the first deep learning model to obtain a first preliminary identification result;
specifically, the first preliminary identification result is a nasopharyngeal carcinoma lesion area;
in fact, having been trained and validated on a large number of samples, the first deep learning model identifies nasopharyngeal carcinoma lesions fairly accurately; however, because the lesion is located in the head, a complex region where the skull, muscle, fat, blood, and other tissues are gathered together and heavily intermingled, a deep learning model's identification is generally poor when the lesion area includes muscle, fat, and similar tissue; in view of this characteristic, this embodiment provides the subsequent steps for continuing to identify the CT image;
s105: judging whether the first preliminary identification result is in a bone tissue area or not; if not, the next step is entered, and if so, the process proceeds to S109;
s106: processing the sample set to obtain a new sample set, and training a second deep learning model according to the new sample set;
specifically, as shown in fig. 3, the step S106 includes:
s1061: randomly deleting 25% of the pixels whose HU values are larger than a preset threshold value from the sample set, to obtain a new sample set;
in this step, pixels with HU values above a certain threshold are randomly deleted, which weakens the influence of bone during training, makes the model pay more attention to muscle and fat information, and improves its ability to identify lesions in muscle and fat regions;
it is worth emphasizing that this embodiment randomly deletes bone-tissue pixels, thereby highlighting muscle and fat information and improving the model's accuracy in identifying nasopharyngeal carcinoma lesions in muscle and fat regions;
in fact, screening the sample set so that only samples whose lesion areas lie in muscle and fat regions are used for model training can achieve the same effect of improving the model's accuracy in identifying nasopharyngeal carcinoma lesions in those regions;
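A minimal sketch of the random deletion in S1061. The concrete HU threshold (300, roughly bone-level) and the water-like fill value of 0 HU are illustrative assumptions, since the text only specifies "a preset threshold" and a 25% fraction:

```python
import numpy as np

def suppress_bone(hu, threshold=300.0, fraction=0.25, fill=0.0, seed=0):
    """Randomly 'delete' a fraction of the pixels whose HU value
    exceeds `threshold` by overwriting them with `fill`, weakening
    the influence of bone during training."""
    rng = np.random.default_rng(seed)
    out = hu.copy()
    candidates = np.flatnonzero(out > threshold)      # bone-like pixels
    n_drop = int(len(candidates) * fraction)          # 25% of them
    drop = rng.choice(candidates, size=n_drop, replace=False)
    out.flat[drop] = fill
    return out
```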
s1062: establishing an initial second deep learning model;
specifically, the second deep learning model is the same as the first deep learning model: a convolutional neural network model comprising an input layer, a convolutional layer, an activation layer, a pooling layer, and a fully connected layer;
s1063: training the convolutional neural network model with the new sample set;
specifically, the convolutional neural network model is trained with the new training set, and the new verification set is used to check whether the model's error function converges;
s1064: obtaining the second deep learning model;
s107: inputting the CT image to be identified into the second deep learning model to obtain a second preliminary identification result;
specifically, the second preliminary identification result is a nasopharyngeal carcinoma lesion area;
s108: judging whether the second preliminary identification result is in a muscle and adipose tissue region; if yes, entering the next step, and if not, sending the CT image to be identified to a doctor for comprehensive judgment;
at this point, most nasopharyngeal carcinoma lesions can be accurately identified by the first and second deep learning models; however, for some extremely complex lesion images, or for records in which the lesion area spreads across both bone tissue and muscle/adipose tissue, neither model is accurate enough, so a doctor is required to make a comprehensive judgment.
S109: and image segmentation is carried out according to the first primary identification result or the second primary identification result.
In a second embodiment, in an example of the present embodiment, a nasopharyngeal carcinoma image segmentation system based on deep learning is provided, where the segmentation system adopts a nasopharyngeal carcinoma image segmentation method based on deep learning in the first embodiment.
In a third embodiment, the present embodiment includes a computer-readable storage medium having a data processing program stored thereon, the data processing program being executed by a processor to perform the method for segmentation of nasopharyngeal carcinoma images based on deep learning of the first embodiment.
It will be apparent to one of ordinary skill in the art that embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The description herein is with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
The above examples and/or embodiments merely illustrate preferred implementations of the present technology and are not intended to limit it in any way; any changes or modifications that a person skilled in the art can make without departing from the technical means disclosed herein shall be considered substantially equivalent to the present technology.

Claims (8)

1. A nasopharyngeal carcinoma image segmentation method based on deep learning, characterized by comprising the following steps:
s101: collecting CT images of nasopharyngeal carcinoma patients of a hospital system as a sample set;
s102: preprocessing the sample set;
s103: training a first deep learning model according to the sample set;
s104: inputting the CT image to be identified into the first deep learning model to obtain a first preliminary identification result;
s105: judging whether the first preliminary identification result is in a bone tissue area or not; if not, the next step is entered, and if so, the process proceeds to S109;
s106: processing the sample set to obtain a new sample set, and training a second deep learning model according to the new sample set;
the S106 includes:
s1061: randomly deleting 25% of the pixels whose HU values are larger than a preset threshold value from the sample set, to obtain a new sample set; randomly deleting pixels with HU values above the threshold weakens the influence of bone during training, so that the model pays more attention to muscle and fat information;
s1062: establishing an initial second deep learning model;
s1063: training the second deep learning model by adopting the new sample set;
s1064: obtaining the second deep learning model;
s107: inputting the CT image to be identified into the second deep learning model to obtain a second preliminary identification result;
s108: judging whether the second preliminary identification result is in a muscle and adipose tissue region; if so, proceeding to S110; if not, sending the CT image to be identified to a doctor for comprehensive judgment;
s109: image segmentation is carried out according to the first primary identification result;
s110: and image segmentation is carried out according to the second primary identification result.
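The bone-masking augmentation of step S1061 can be sketched as follows. This is an illustrative implementation only: the threshold of 150 HU, the zero fill value, and the function name are assumptions, since the claims specify only "a preset threshold" and a 25% deletion fraction.

```python
import numpy as np

def mask_high_hu_pixels(hu_image, threshold=150.0, fraction=0.25,
                        fill_value=0.0, rng=None):
    """Randomly delete a fraction of pixels whose HU value exceeds the
    threshold, weakening the influence of bone during training so the
    model attends more to muscle and fat (sketch of step S1061)."""
    rng = np.random.default_rng() if rng is None else rng
    out = hu_image.astype(np.float32).copy()
    high = np.argwhere(out > threshold)        # coordinates of high-HU pixels
    n_drop = int(len(high) * fraction)         # 25% of them by default
    if n_drop > 0:
        picked = high[rng.choice(len(high), size=n_drop, replace=False)]
        out[tuple(picked.T)] = fill_value      # "delete" by overwriting
    return out
```

Applied to every image in the sample set, this yields the new sample set used to train the second deep learning model in S1063.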
2. The method for segmentation of a nasopharyngeal carcinoma image based on deep learning according to claim 1, wherein in S101 the CT images include a nasopharyngeal carcinoma focus area manually delineated by a professional radiotherapy doctor.
3. The method for segmentation of nasopharyngeal carcinoma images based on deep learning according to claim 2, wherein in S101, CT images from electronic medical records of patients diagnosed with nasopharyngeal carcinoma in Tianjin between 2000 and 2020 are used as the sample set for segmentation of nasopharyngeal carcinoma images.
4. The method according to claim 1, wherein in S102, the preprocessing includes converting pixel values of the CT image into HU values and removing outliers from the CT image.
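The preprocessing of claim 4 (converting pixel values to HU values and removing outliers) can be sketched with the standard DICOM rescale relation HU = pixel × RescaleSlope + RescaleIntercept. The clipping window of [-1000, 1000] HU used to remove outliers is an illustrative assumption, not a value given by the patent.

```python
import numpy as np

def pixels_to_hu(pixel_array, rescale_slope=1.0, rescale_intercept=-1024.0,
                 hu_min=-1000.0, hu_max=1000.0):
    """Convert raw CT pixel values to Hounsfield units via the DICOM
    rescale relation, then clip extreme values to remove outliers
    (sketch of the preprocessing in S102 / claim 4)."""
    hu = pixel_array.astype(np.float32) * rescale_slope + rescale_intercept
    return np.clip(hu, hu_min, hu_max)
```

In practice the slope and intercept would be read per-slice from the DICOM header rather than defaulted.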
5. The method for segmentation of nasopharyngeal carcinoma image based on deep learning according to claim 1, wherein said step S103 specifically comprises:
s1031: establishing an initial first deep learning model;
s1032: dividing the sample set into a training set and a verification set according to the proportion of 8:2;
s1033: and obtaining the first deep learning model.
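The 8:2 split of S1032 can be sketched as a shuffled partition of the sample identifiers; the fixed seed is an illustrative assumption for reproducibility.

```python
import numpy as np

def split_8_2(sample_ids, seed=42):
    """Shuffle sample identifiers and split them 8:2 into a training
    set and a validation set (sketch of step S1032)."""
    rng = np.random.default_rng(seed)
    ids = np.array(sample_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * 0.8)                 # 80% for training
    return ids[:cut].tolist(), ids[cut:].tolist()
```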
6. The method for segmentation of nasopharyngeal carcinoma images based on deep learning according to claim 1, wherein said second deep learning model is the same as said first deep learning model: a convolutional neural network model comprising an input layer, a convolutional layer, an activation layer, a pooling layer and a fully-connected layer.
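The layer order named in claim 6 (input → convolution → activation → pooling → fully-connected) can be illustrated with a minimal numpy forward pass. All shapes, kernels and weights below are illustrative assumptions; a real implementation would use a deep learning framework and learned parameters.

```python
import numpy as np

def relu(x):
    """Activation layer."""
    return np.maximum(x, 0.0)

def conv2d(img, kernel):
    """Convolutional layer: valid 2-D convolution, single channel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Pooling layer: non-overlapping max pooling."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def tiny_cnn_forward(img, kernel, fc_weights):
    """Input -> convolution -> ReLU activation -> max pooling ->
    fully-connected output, mirroring the layers named in claim 6."""
    x = relu(conv2d(img, kernel))
    x = max_pool2d(x).ravel()
    return fc_weights @ x   # fully-connected layer producing logits
```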
7. The method for segmenting a nasopharyngeal carcinoma image based on deep learning according to claim 1, wherein said first preliminary identification result and said second preliminary identification result are nasopharyngeal carcinoma focus areas.
8. A deep learning-based nasopharyngeal carcinoma image segmentation system employing a deep learning-based nasopharyngeal carcinoma image segmentation method as set forth in any one of claims 1-7.
CN202310835099.6A 2023-07-10 2023-07-10 Nasopharyngeal carcinoma image segmentation method and system based on deep learning Active CN116580037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310835099.6A CN116580037B (en) 2023-07-10 2023-07-10 Nasopharyngeal carcinoma image segmentation method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310835099.6A CN116580037B (en) 2023-07-10 2023-07-10 Nasopharyngeal carcinoma image segmentation method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN116580037A CN116580037A (en) 2023-08-11
CN116580037B true CN116580037B (en) 2023-10-13

Family

ID=87539982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310835099.6A Active CN116580037B (en) 2023-07-10 2023-07-10 Nasopharyngeal carcinoma image segmentation method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN116580037B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115156B (en) * 2023-10-23 2024-01-05 天津医科大学第二医院 Nasopharyngeal carcinoma image processing method and system based on dual-model segmentation

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109389584A (en) * 2018-09-17 2019-02-26 成都信息工程大学 Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
CN113171121A (en) * 2021-04-20 2021-07-27 吉林大学 Multi-physical-field-coupling-based skeletal muscle system disease diagnosis device and method
CN113330485A (en) * 2019-01-08 2021-08-31 诺沃库勒有限责任公司 Assessing the quality of segmenting an image into different types of tissue for planning a treatment using a tumor treatment field (TTField)
CN115210772A (en) * 2020-01-03 2022-10-18 佩治人工智能公司 System and method for processing electronic images for universal disease detection
WO2023020198A1 (en) * 2021-08-16 2023-02-23 腾讯科技(深圳)有限公司 Image processing method and apparatus for medical image, and device and storage medium
CN116228787A (en) * 2022-09-08 2023-06-06 深圳市联影高端医疗装备创新研究院 Image sketching method, device, computer equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2017055412A1 (en) * 2015-09-30 2017-04-06 Siemens Healthcare Gmbh Method and system for classification of endoscopic images using deep decision networks


Non-Patent Citations (2)

Title
MSU-Net: Multi-scale Sensitive U-Net based on pixel-edge-region level collaborative loss for nasopharyngeal MRI segmentation; Yuanquan Hao et al.; Computers in Biology and Medicine; Vol. 159; full text *
Research progress on the application of deep learning in nasopharyngeal carcinoma imaging and image analysis; Yang Chendi; Cancer (癌症); Vol. 41, No. 9; full text *

Also Published As

Publication number Publication date
CN116580037A (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN109493325B (en) Tumor heterogeneity analysis system based on CT images
US7796795B2 (en) System and method for computer aided detection and diagnosis from multiple energy images
US6728334B1 (en) Automatic detection of pulmonary nodules on volumetric computed tomography images using a local density maximum algorithm
CN107545584A (en) The method, apparatus and its system of area-of-interest are positioned in medical image
JP2004000609A (en) Computer assisted diagnosis by multiple energy image
Fernandes et al. A novel fusion approach for early lung cancer detection using computer aided diagnosis techniques
CN109255354B (en) Medical CT-oriented computer image processing method and device
US11478163B2 (en) Image processing and emphysema threshold determination
CN116580037B (en) Nasopharyngeal carcinoma image segmentation method and system based on deep learning
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
EP3971830A1 (en) Pneumonia sign segmentation method and apparatus, medium and electronic device
JP2015066311A (en) Image processor, image processing method, program for controlling image processor, and recording medium
JP2002301051A (en) Tomographic segmentation
Midya et al. Computerized diagnosis of liver tumors from CT scans using a deep neural network approach
US11935234B2 (en) Method for detecting abnormality, non-transitory computer-readable recording medium storing program for detecting abnormality, abnormality detection apparatus, server apparatus, and method for processing information
Fan et al. Automatic segmentation of pulmonary nodules by using dynamic 3D cross-correlation for interactive CAD systems
Armya et al. Medical images segmentation based on unsupervised algorithms: a review
US20050002548A1 (en) Automatic detection of growing nodules
CN114742753A (en) Image evaluation method and device based on neural network
KR102136107B1 (en) Apparatus and method for alignment of bone suppressed chest x-ray image
JP2001216517A (en) Object recognition method
Chen et al. Thyroid nodule detection using attenuation value based on non-enhancement CT images
Giordano et al. Automatic skeletal bone age assessment by integrating EMROI and CROI processing
CN113658172B (en) Image processing method and device, computer readable storage medium and electronic device
Oyovwe et al. An enhanced Convolutional Neural Network (CNN) model for the detection of lung cancer using X-Ray image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant