CN113035334B - Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma - Google Patents


Info

Publication number
CN113035334B
CN113035334B (application CN202110562553.6A)
Authority
CN
China
Prior art keywords
cell lymphoma
neural network
image
model
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110562553.6A
Other languages
Chinese (zh)
Other versions
CN113035334A (en)
Inventor
章毅
柏森
王强
胡俊杰
宋莹
余程嵘
李贵元
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202110562553.6A
Publication of CN113035334A
Application granted
Publication of CN113035334B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G16H 30/20 — ICT specially adapted for the handling or processing of medical images (e.g. DICOM, HL7 or PACS)
    • G16H 30/40 — ICT specially adapted for processing medical images (e.g. editing)
    • G06N 3/045 — Neural network architectures: combinations of networks
    • G06N 3/047 — Neural network architectures: probabilistic or stochastic networks
    • G06N 3/048 — Neural network architectures: activation functions
    • G06N 3/08 — Neural networks: learning methods
    • G06T 7/344 — Image registration using feature-based methods involving models
    • G06T 2207/10081 — Image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/30096 — Subject of image: tumor; lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to the technical field of medical imaging and discloses a method and a device for automatically delineating the radiotherapy target area of nasal cavity NKT cell lymphoma. The method comprises the following steps: acquiring delineated CT images of nasal cavity NKT cell lymphoma; preprocessing the CT images to obtain a deep learning data set; training a deep neural network model on the data set to obtain a nasal cavity NKT cell lymphoma delineation model; and preprocessing a clinical CT image and inputting the preprocessed image into the delineation model for delineation. The invention addresses the problem in the prior art that, because individual differences between patients are often large, the existing deformation-registration approach to delineation struggles to define a universal template that meets delineation requirements, resulting in low delineation accuracy.

Description

Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma
Technical Field
The invention relates to the technical field of medical imaging, and in particular to a method and a device for automatically delineating the radiotherapy target area of nasal cavity NKT cell lymphoma.
Background
Nasal NK/T cell lymphoma is a relatively common malignant lymphoma in Asia, Latin America, and South America; it is most prevalent in China, Japan, and Korea in East Asia, and typically affects young males between 30 and 50 years of age. Radiotherapy (short for radiation therapy) uses the ionizing action of high-energy rays to damage DNA and thereby stop or slow the growth of tumor cells. Because nasal NK/T cell lymphoma is sensitive to radiotherapy and refractory to chemotherapy, radiotherapy has become one of the main treatments for early-stage nasal NK/T cell lymphoma.
Before a patient receives radiotherapy, a computed tomography (CT) scan is taken, and a radiation oncologist outlines the region containing the tumor on the CT images. The tumor region includes the tumor itself and potentially affected tissue. To kill the cancer cells while minimizing the radiation dose delivered to the normal organs around the tumor during treatment, the physician must also outline those normal organs. Accurately delineating the regions occupied by the tumor and the normal organs on the patient's CT images is a process collectively referred to as target delineation in radiotherapy. Target delineation is a key step of radiotherapy: if the delineated range is too large, normal organs are exposed to radiation; if it is too small, the cancer cannot be treated effectively.
Delineation is currently performed mainly by hand. Because nasal NK/T cell lymphoma lesions are located in the head and neck, many radiation-sensitive normal organs surround them. To protect these organs as much as possible when planning radiotherapy, the physician must manually delineate each normal organ and the target region layer by layer on the patient's CT images, so a single case usually takes 2-3 hours. This manual process is time-consuming and labor-intensive, its accuracy depends on the physician's personal experience and subjective judgment, and existing studies show that delineation results differ greatly between physicians.
Software now exists that can automatically generate normal-organ contours, mainly by deformation registration: target regions and normal organs are delineated in advance on the CT images of one or more patients to form a standard template library; the CT images to be delineated are elastically registered against the template images to obtain a deformation matrix; and the target region and normal organs for the new images are then generated from the deformation matrix and the template delineations. However, because lesion position and morphology vary from patient to patient, no software can currently delineate the target region and normal organs of nasal NK/T cell lymphoma automatically.
Disclosure of Invention
Based on the above technical problems, the invention provides a method for automatically delineating the radiotherapy target area of nasal cavity NKT cell lymphoma. It solves the problem in the prior art that, because individual differences between patients are often large, the existing deformation-registration approach to delineation struggles to define a universal template that meets delineation requirements, resulting in low delineation accuracy.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a method for automatically delineating a radiotherapy target area of nasal cavity NKT cell lymphoma comprises the following steps:
acquiring a delineated CT image of the NKT cell lymphoma of the nasal cavity;
preprocessing a CT image to obtain a deep learning data set;
training the deep learning data set in a deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model;
and preprocessing a clinical CT image and inputting the preprocessed image into the nasal cavity NKT cell lymphoma delineation model for delineation.
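The four steps above can be sketched end to end as follows. This is an illustrative skeleton only: the function names and the dict-based stand-in "model" are hypothetical, and real training and inference would use the deep neural network described later.

```python
# Illustrative sketch of the four-step pipeline described above.
# All function names and the dict-based "model" are hypothetical stand-ins,
# not the patent's actual implementation.

def preprocess(ct_slices):
    """Normalize each slice's grey levels to [0, 1] (steps 2 and 4)."""
    out = []
    for s in ct_slices:
        lo, hi = min(s), max(s)
        out.append([(v - lo) / (hi - lo) if hi > lo else 0.0 for v in s])
    return out

def train_model(dataset):
    """Stand-in for deep-neural-network training (step 3)."""
    return {"trained_on": len(dataset)}

def delineate(model, ct_slices):
    """Stand-in for inference: one binary mask value per pixel (step 4)."""
    return [[1 if v > 0.5 else 0 for v in s] for s in ct_slices]

# End-to-end use on toy data:
annotated = [[0, 50, 100], [10, 20, 200]]      # "delineated" training CTs
dataset = preprocess(annotated)                 # step 2
model = train_model(dataset)                    # step 3
clinical = preprocess([[5, 95, 180]])           # step 4: same preprocessing
masks = delineate(model, clinical)
```

The point of the sketch is the data flow: the clinical image passes through the same preprocessing as the training data before it reaches the delineation model.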
Further, training the deep learning data set in a deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model comprises:
dividing CT images in the deep learning data set into a training set and a verification set;
inputting the CT image of the training set into a deep neural network model to obtain image characteristic data;
training the image characteristic data according to a residual error network to obtain a training model;
inputting the CT images of the verification set into a training model to obtain evaluation parameters for evaluating the delineation effect;
comparing the evaluation parameters with preset parameters;
and if the evaluation parameter is greater than or equal to the preset parameter, the training model is a nasal cavity NKT cell lymphoma delineation model.
Further, if the evaluation parameter is smaller than the preset parameter, adjusting the parameter of the deep neural network model;
the method for adjusting the parameters of the deep neural network model comprises the following steps:
adopting cross entropy as a loss function of the deep neural network model;
and iteratively updating the loss function through a gradient descent algorithm until the loss function is converged to obtain the optimal deep neural network parameters.
Further, the evaluation parameter is obtained by a set similarity metric function (DSC).
Further, the residual network employs a resnet50 residual network model.
Further, the deep neural network model adopts an adaptive Dropout optimization algorithm to solve the problem of overfitting existing in the deep neural network model.
Further, the method for solving the problem of overfitting in the deep neural network model by adopting the self-adaptive Dropout optimization algorithm in the deep neural network model comprises the following steps:
constructing an agent module, wherein the agent module comprises a global mean pooling layer, a fully connected layer, a nonlinear activation layer and a logistic regression layer;
obtaining the discarding probability of the neurons in the deep neural network model through the agent module;
comparing the discarding probability with a preset importance threshold;
and if the discarding probability is greater than the importance threshold, randomly discarding the neurons according to the discarding probability.
To solve the above technical problems, the application further discloses a device for automatically delineating the radiotherapy target area of nasal cavity NKT cell lymphoma, comprising:
the image acquisition module is used for acquiring a CT image of delineated nasal cavity NKT cell lymphoma;
the image processing module is used for preprocessing the CT image to obtain a deep learning data set;
the model generation module is used for training the deep learning data set in the deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model;
and the automatic delineation module is used for preprocessing the clinical CT image and inputting the preprocessed clinical CT image into the nasal cavity NKT cell lymphoma delineation model for delineation.
In order to solve the technical problem, the application further discloses a computer device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and the processor implements the steps of the method for automatically delineating the target region for the radiotherapy of the nasal cavity NKT cell lymphoma when executing the computer program.
The present application further discloses a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, implements the steps of the method for automatically delineating the target region for radiotherapy of the nasal cavity NKT cell lymphoma.
Compared with the prior art, the invention has the beneficial effects that:
1. The target area and normal organs of nasal NK/T cell lymphoma are delineated automatically by a deep neural network model. Once the model is trained, only the patient's CT images need to be input; the model automatically generates a DICOM file containing the target area and normal organs, with no image registration or template configuration required.
2. The model uses multiple multi-scale modules in the encoder stage, so it can effectively extract lesion features of different sizes, shapes, positions, and densities. In addition, because the training set contains CT images from a large number of patients, the robustness of the model is further enhanced.
3. Because each decoder is connected to both deep and shallow features, the convergence speed of the model is increased; and because an adaptive Dropout algorithm is applied to the neurons in each decoder, overfitting when the sample size is small is alleviated and the generalization ability of the model is improved.
4. After the deep neural network model is trained, the target area and normal organs of a patient can be delineated quickly using GPU-accelerated computation. This speeds up delineation, reduces manual delineation errors, shortens the waiting time of patients awaiting nasal NK/T cell lymphoma radiotherapy, and improves the standardization and normalization of target delineation.
Drawings
The present application will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings, in which:
fig. 1 is a schematic flow chart of an automatic delineation method of a target region for nasal cavity NKT cell lymphoma radiotherapy according to the present application.
Fig. 2 is a schematic flow chart of a specific method for obtaining a nasal cavity NKT cell lymphoma delineation model according to the present application.
Fig. 3 is a schematic diagram of a deep neural network segmentation model according to the present application.
Fig. 4 is a flow chart of the adaptive Dropout optimization algorithm.
Fig. 5 is a schematic diagram of the structure of the agent module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Referring to fig. 1, in this embodiment, a method for automatically delineating a target region for radiotherapy of nasal cavity NKT cell lymphoma includes:
s101, collecting a delineated CT image of the NKT cell lymphoma of the nasal cavity;
the CT image of the nasal cavity NKT cell lymphoma is a tomography image of a patient collected and shot by a radiotherapy simulated positioning CT machine, and specifically, the interlayer spacing of the CT image is 3 mm.
The acquired CT images are used by a radiation oncologist to manually delineate the target area and normal organs layer by layer according to the image information.
S102, preprocessing a CT image to obtain a deep learning data set;
Unlike the traditional manual delineation performed by radiation oncologists on CT images, a deep neural network can abstract the data deeply and extract high-level abstract features that have stronger expressive power and better translation invariance. However, deep neural network models usually have a large number of parameters and require large-scale labeled training data. The CT data preprocessing therefore includes converting the target-area and normal-organ contours delineated by the physician into masks, and normalizing the grey levels of the training CT images and applying random scaling, rotation, translation, and similar transforms, thereby augmenting the training samples.
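A minimal NumPy sketch of the preprocessing and augmentation steps named above: converting delineated pixels into a mask, normalizing grey levels, and applying a random translation and rotation identically to image and mask. The helper names are hypothetical, and random scaling is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(ct):
    """Grey-level normalization to zero mean / unit variance (one of the
    preprocessing steps named above)."""
    ct = ct.astype(np.float32)
    return (ct - ct.mean()) / (ct.std() + 1e-8)

def contours_to_mask(shape, pixel_coords):
    """Convert physician-delineated pixels into a binary label mask."""
    mask = np.zeros(shape, dtype=np.uint8)
    for r, c in pixel_coords:
        mask[r, c] = 1
    return mask

def augment(image, mask):
    """One random translation + 90-degree rotation, applied identically to
    image and mask so labels stay aligned. (The text also mentions random
    scaling; that is omitted here for brevity.)"""
    dy, dx = rng.integers(-2, 3, size=2)
    image = np.roll(image, (dy, dx), axis=(0, 1))
    mask = np.roll(mask, (dy, dx), axis=(0, 1))
    k = int(rng.integers(0, 4))
    return np.rot90(image, k), np.rot90(mask, k)

ct = rng.normal(40, 10, size=(8, 8))            # toy CT slice (HU-like values)
mask = contours_to_mask((8, 8), [(3, 3), (3, 4), (4, 3)])
x, y = augment(normalize(ct), mask)
```

Applying the same geometric transform to image and mask is the essential point: augmentation that moves only one of the two would corrupt the labels.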
S103, training the deep learning data set in a deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model;
and S104, preprocessing the clinical CT image, and inputting the preprocessed clinical CT image into a nasal cavity NKT cell lymphoma delineation model for delineation.
The method comprises the steps of preprocessing patient clinical CT image data, wherein the preprocessing comprises image gray level normalization and cropping according to images.
Specifically, the delineation means that a target area and a normal organ contour of a clinical CT image are extracted, and contour edge lines are communicated to complete the delineation.
Preferably, after the clinical CT image is sketched, the clinical CT image is subjected to edge smoothing and impurity point removing processing and then converted into a DICOM file, and an automatic sketching result is generated.
Referring to fig. 2, in some embodiments, training the deep learning data set in a deep neural network model to obtain a delineation model of nasal NKT cell lymphoma comprises:
s201, dividing CT images in a deep learning data set into a training set and a verification set;
The deep learning data set is divided proportionally into two subsets, a training set and a validation set. The training set is used to train the internal parameters of the deep neural network model; the validation set is used to quantitatively evaluate the segmentation performance of the model during training, to check the model's state and convergence, and to tune the hyper-parameters.
Specifically, the ratio of training set to validation set is 4:1.
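The 4:1 split can be sketched as below. `split_dataset` is a hypothetical helper, and splitting by patient case (rather than by individual slice) is an assumption added to keep the illustration realistic.

```python
import random

def split_dataset(cases, train_ratio=0.8, seed=42):
    """Shuffle patient cases, then split 4:1 into training and validation
    sets as described above. Splitting by case (not by slice) avoids slices
    from one patient appearing in both subsets."""
    cases = list(cases)
    random.Random(seed).shuffle(cases)
    cut = int(len(cases) * train_ratio)
    return cases[:cut], cases[cut:]

train, val = split_dataset(range(100))   # 80 training cases, 20 validation
```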
S202, inputting the CT image of the training set into a deep neural network model to obtain image characteristic data;
A deep neural network is composed of neurons and the connections between them. The network is divided into an input layer, hidden layers, and an output layer, where the hidden layers consist of one or more convolutional layers (CONV), activation layers (RELU), pooling layers (POOL), upsampling layers (UpSample), and concatenation layers (CONCAT). The network extracts deep abstract features from the input image through the hidden layers, and the output layer segments the input image according to the extracted features.
S203, training the image characteristic data according to the residual error network to obtain a training model;
The residual network uses a resnet50 residual network model. Using resnet50 deepens the model while avoiding vanishing gradients, which prevents model degradation and improves performance; through skip connections, the model combines the shallow and deep semantic information of the image and delineates the image at multiple scales. The deep neural network model designed in the present application is shown in fig. 3.
The encoder is composed of a series of residual network blocks based on resnet50 and extracts abstract features such as edge texture from the input CT image. The resnet50 residual network consists of five groups of convolutions. The first group consists of 1 convolutional layer conv(7 × 7 × 64, stride = 2) and 1 pooling layer maxpool(3 × 3, stride = 2); the second group comprises 3 convolution blocks, each mainly comprising 3 convolutional layers: conv(1 × 1 × 64), conv(3 × 3 × 64), conv(1 × 1 × 256); the third group comprises 4 convolution blocks, each mainly comprising 3 convolutional layers: conv(1 × 1 × 128), conv(3 × 3 × 128), conv(1 × 1 × 512); the fourth group comprises 6 convolution blocks, each mainly comprising 3 convolutional layers: conv(1 × 1 × 256), conv(3 × 3 × 256), conv(1 × 1 × 1024); the fifth group comprises 3 convolution blocks, each mainly comprising 3 convolutional layers: conv(1 × 1 × 512), conv(3 × 3 × 512), conv(1 × 1 × 2048).
The decoder restores the extracted features to the size of the original input image and recovers the boundary information of the image data so as to predict the class of each pixel of the original CT image. The decoder's inputs come from the deep features of the resnet50 residual network module and the shallow features of the encoder's hidden layers; fusing the deep and shallow features improves feature reuse and accelerates model convergence. The Upsample module in the decoder upsamples the features by a given factor using bilinear interpolation, completing the pixel-level segmentation task.
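The decoder's bilinear Upsample step can be sketched in NumPy as follows. This is a minimal illustration using an align_corners-style coordinate mapping (an assumption, since the text does not specify one); deep-learning frameworks provide tuned equivalents such as `torch.nn.functional.interpolate(mode="bilinear")`.

```python
import numpy as np

def bilinear_upsample(feat, factor=2):
    """Bilinear upsampling of a 2-D feature map, as performed by the
    decoder's Upsample module described above (align_corners-style mapping)."""
    h, w = feat.shape
    H, W = h * factor, w * factor
    # Source coordinates for every target pixel.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                      # vertical blend weights
    wx = (xs - x0)[None, :]                      # horizontal blend weights
    top = feat[np.ix_(y0, x0)] * (1 - wx) + feat[np.ix_(y0, x1)] * wx
    bot = feat[np.ix_(y1, x0)] * (1 - wx) + feat[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

f = np.array([[0.0, 1.0], [2.0, 3.0]])
up = bilinear_upsample(f)                        # 2x2 -> 4x4
```

Corner values are preserved exactly and interior values are linear blends of the four nearest inputs, which is what allows the decoder to recover smooth boundary information.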
S204, inputting the CT images of the verification set into a training model to obtain evaluation parameters for evaluating the delineation effect;
After all training-set images have completed one forward pass and one backward pass through the neural network, the delineation performance of the model is quantitatively evaluated on the validation-set images. Preferably, the evaluation parameter is obtained with the set similarity metric (Dice similarity coefficient, DSC), defined as:

DSC(A, B) = 2|A ∩ B| / (|A| + |B|)

where A and B denote the labeled target-area data and the target-area data predicted by the neural network, respectively, and |A ∩ B| denotes their overlapping area. The closer the DSC value is to 1, the smaller the difference between A and B and the better the delineation.
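The DSC defined above can be computed for binary masks in a few lines of NumPy. The both-empty convention (returning 1.0 when neither mask contains any pixels) is an assumption, not stated in the text.

```python
import numpy as np

def dsc(label, pred):
    """Dice similarity coefficient between the labeled target-area mask A
    and the network-predicted mask B: DSC = 2|A ∩ B| / (|A| + |B|)."""
    label = label.astype(bool)
    pred = pred.astype(bool)
    denom = label.sum() + pred.sum()
    if denom == 0:
        return 1.0   # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(label, pred).sum() / denom

a = np.array([[1, 1], [0, 0]])    # toy label mask
b = np.array([[1, 0], [0, 0]])    # toy prediction
score = dsc(a, b)                 # 2*1 / (2 + 1) = 2/3
```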
S205, comparing the evaluation parameters with preset parameters;
When the set similarity metric DSC is used to obtain the evaluation parameter in step S204, the preset parameter takes a value in the range 0-1. In practice, the preset parameter is specifically set to 0.9.
And S206, if the evaluation parameter is greater than or equal to the preset parameter, the training model is a nasal cavity NKT cell lymphoma delineation model.
In some embodiments, if the evaluation parameter is smaller than the preset parameter, adjusting the parameter of the deep neural network model;
the method for adjusting the parameters of the deep neural network model comprises the following steps:
adopting cross entropy as a loss function of the deep neural network model;
and iteratively updating the loss function through a gradient descent algorithm until the loss function is converged to obtain the optimal deep neural network parameters.
In this embodiment, once the deep neural network model has been constructed, it can be trained. The training process feeds an image forward through the network to output an estimated segmentation, then uses backpropagation to minimize the loss function between the segmentation output by the model and the labeled image, searching for the optimal network parameters. The invention uses cross entropy as the loss function of the model, defined as:

L = -Σ_i [ y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i) ]

where ŷ and y denote the model's output image and the label image, respectively. The network computes the gradient of the loss function L with respect to the weights, and iterative gradient descent continually reduces the loss value to find a suitable set of model parameters. The gradient descent update is:

W ← W - η ∂L/∂W

where η denotes the learning-rate constant.
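The loss-minimization loop above (cross entropy plus iterative gradient descent) can be illustrated on a toy logistic model. The one-layer "network", synthetic data, learning rate, and iteration count are all hypothetical stand-ins for the actual deep model.

```python
import numpy as np

def bce(y, p):
    """Pixel-wise binary cross-entropy loss L, as defined above."""
    p = np.clip(p, 1e-7, 1 - 1e-7)          # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "network": one weight per input feature, sigmoid output.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = (X[:, 0] > 0).astype(float)              # synthetic labels
w = np.zeros(3)
eta = 0.5                                    # learning-rate constant

loss0 = bce(y, sigmoid(X @ w))
for _ in range(200):                         # iterate toward convergence
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)            # dL/dw for sigmoid + cross entropy
    w = w - eta * grad                       # gradient-descent update W <- W - eta * dL/dW
loss1 = bce(y, sigmoid(X @ w))
```

Each pass mirrors the text: forward conduction produces the estimate, the gradient of L is computed with respect to the weights, and the update steps the weights against that gradient until the loss converges.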
In some embodiments, the deep neural network model employs an adaptive Dropout optimization algorithm to solve the problem of overfitting that exists in the deep neural network model.
Referring to fig. 4, in detail, the method for solving the overfitting problem existing in the deep neural network model by using the adaptive Dropout optimization algorithm in the deep neural network model includes:
s401, constructing an agent module, wherein the agent module comprises a global mean pooling layer, a full connection layer, a nonlinear activation layer and a logistic regression layer;
s402, obtaining the discarding probability of the neurons in the deep neural network model through the agent module;
The agent module obtains a neuron's drop probability as follows. The three-dimensional abstract features extracted by a convolutional layer are first reduced to two dimensions by the global mean pooling layer, so that the entire spatial feature map of each channel is encoded into one global feature. The fully connected layer then maps the two-dimensional features to model the correlations between channels; the nonlinear activation layer processes the features nonlinearly; and finally the logistic regression layer normalizes the result so that the drop probability always lies between 0 and 1. The drop probability can be used to assess the significance of a neuron: the closer the value is to 1, the more significant the neuron.
Specifically, the nonlinear activation layer uses the ReLU function, and the logistic regression layer uses the Sigmoid function.
Referring to fig. 5, the structure of the agent module can be described by the following formula:

p_i = Sigmoid( f( GAP(a) · W_1 ) · W_2 )

where p_i represents the drop probability of the i-th neuron, Sigmoid represents the logistic regression function, f represents the nonlinear activation function, GAP represents global mean pooling, W_n represents the connection weights of the n-th fully connected layer, and a represents a neuron.
S403, comparing the discarding probability with a preset importance threshold;
s404, if the discarding probability is larger than the importance threshold, the neurons are randomly discarded according to the discarding probability.
In existing deep neural network models, the drop probability of a neuron under the conventional Dropout algorithm follows a Bernoulli distribution with probability θ; neurons in the same layer share the hyper-parameter θ, and the optimal value of θ is found by grid search. That is, in existing models, all neurons are dropped with the same probability θ under the conventional Dropout algorithm to avoid overfitting of the deep neural network model.
In the deep neural network model of the present application, an agent module is designed, and the importance of each neuron is quantified and normalized through the Sigmoid function: the closer the output of the agent module is to 1, the more important the neuron. During training of the deep neural network model, the connection weights W of the fully connected layers are learned and updated. As can be seen from the structural formula of the agent module, a change in the connection weight W affects the drop probability p_i of an individual neuron; that is, the drop probability p_i is adjusted adaptively during training of the deep neural network model.
Thus, the agent module measures the drop probability p_i of each neuron, and p_i is learned automatically from the training data rather than being set to a fixed value. In order to avoid discarding moderately important neurons during training while still applying a random penalty to neurons that carry important features, an importance threshold β is introduced into the agent module. When the drop probability of a neuron is greater than the importance threshold, the neuron is randomly discarded according to its drop probability; the remaining neurons are always retained. That is, by applying the importance threshold, the output of the agent module obeys a Bernoulli distribution, thereby realizing adaptive Dropout for the neurons.
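The agent-module pipeline (GAP, fully connected layer, nonlinear activation, logistic regression) and the thresholded discarding step described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions: the drop probabilities are computed per feature channel, and the layer sizes and all function names are ours, not fixed by the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def agent_module(feature_maps, w1, w2):
    """Agent module: GAP -> FC -> ReLU -> FC -> Sigmoid.
    feature_maps: (channels, H, W); returns one drop probability per channel."""
    gap = feature_maps.mean(axis=(1, 2))   # global average pooling
    hidden = np.maximum(gap @ w1, 0.0)     # fully connected layer + ReLU
    return sigmoid(hidden @ w2)            # fully connected layer + Sigmoid

def adaptive_dropout(feature_maps, p, beta, rng):
    """Randomly drop a channel with probability p_i only when p_i exceeds
    the importance threshold beta; otherwise the channel is always kept."""
    drop = (p > beta) & (rng.random(p.shape) < p)
    mask = (~drop).astype(float)
    return feature_maps * mask[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8, 8))        # 16 channels of 8x8 feature maps
w1 = rng.standard_normal((16, 4)) * 0.1
w2 = rng.standard_normal((4, 16)) * 0.1
p = agent_module(x, w1, w2)
y = adaptive_dropout(x, p, beta=0.5, rng=rng)
```

Because the drop probabilities come from learned weights w1 and w2 rather than a shared hyper-parameter, they change as training updates those weights, which is the adaptive behaviour the passage describes.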
Specifically, the preset importance threshold β is set manually, and its specific value needs to be chosen according to the specific task.
That is, in the present application, the agent module evaluates each neuron and determines whether to discard it, thereby realizing adaptive Dropout. Compared with the traditional Dropout algorithm, the adaptive Dropout algorithm can effectively prevent features from depending on a subset of neurons, reduce the co-adaptation of neurons, and enhance the generalization capability of the network. It solves the problem that a suitable Dropout parameter is difficult to determine for the existing Dropout algorithm. Moreover, whereas the application range of the traditional Dropout algorithm is limited, usually to the last few layers of a neural network, the adaptive Dropout algorithm further expands the application range of Dropout.
Referring to the figures, in some embodiments, an apparatus for automatically delineating a target region for nasal NKT cell lymphoma radiotherapy comprises:
the image acquisition module is used for acquiring a CT image of delineated nasal cavity NKT cell lymphoma;
the image processing module is used for preprocessing the CT image to obtain a deep learning data set;
the model generation module is used for training the deep learning data set in the deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model;
and the automatic delineation module is used for preprocessing the clinical CT image and inputting the preprocessed clinical CT image into the nasal cavity NKT cell lymphoma delineation model for delineation.
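The cooperation of the four modules above can be sketched as a simple pipeline. Everything below is illustrative scaffolding under our own assumptions (the class and method names, the normalisation choice, and the placeholder thresholding "model" are ours; a real device would train the deep neural network described earlier):

```python
import numpy as np

class DelineationDevice:
    """Sketch of the four-module device: image acquisition, image
    processing, model generation, and automatic delineation."""

    def acquire_image(self):
        # Image acquisition module: stand-in for reading a delineated CT slice.
        return np.random.default_rng(0).random((64, 64))

    def preprocess(self, ct_image):
        # Image processing module: e.g. min-max normalisation to [0, 1].
        lo, hi = ct_image.min(), ct_image.max()
        return (ct_image - lo) / (hi - lo + 1e-8)

    def generate_model(self, dataset):
        # Model generation module: placeholder "model" that thresholds at
        # the dataset mean (a real device would train a deep network here).
        mean = np.mean(dataset)
        return lambda img: (img > mean).astype(np.uint8)

    def delineate(self, model, clinical_ct):
        # Automatic delineation module: preprocess, then apply the model.
        return model(self.preprocess(clinical_ct))

device = DelineationDevice()
img = device.acquire_image()
model = device.generate_model([device.preprocess(img)])
mask = device.delineate(model, img)
```

The output `mask` plays the role of the delineated target region: a binary map over the CT slice.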
In some embodiments, the present application further discloses a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above-mentioned method for automatically delineating the target region for radiotherapy of nasal cavity NKT cell lymphoma when executing the computer program.
The computer device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory may be an internal storage unit of the computer device, such as a hard disk or memory of the computer device. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device. Of course, the memory may also include both internal and external storage devices of the computer device. In this embodiment, the memory is commonly used for storing the operating system and various application software installed on the computer device, such as the program code for automatically delineating target regions of nasal cavity NKT cell lymphoma radiotherapy based on deep learning. In addition, the memory may also be used to temporarily store various types of data that have been or are to be output.
The processor may, in some embodiments, be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor is typically used to control the overall operation of the computer device. In this embodiment, the processor is configured to run the program code stored in the memory or to process data, such as executing the program code of the deep learning-based method for automatically delineating the target region for nasal cavity NKT cell lymphoma radiotherapy.
In some embodiments, the present application further discloses a computer-readable storage medium, wherein the computer-readable storage medium stores thereon a computer program, and when the computer program is executed by a processor, the computer program implements the above-mentioned steps of the method for automatically delineating the target region for radiotherapy of the nasal cavity NKT cell lymphoma.
Wherein the computer readable storage medium stores a program executable by at least one processor to cause the at least one processor to perform the steps of the method for automatically delineating the target region for nasal cavity NKT cell lymphoma radiotherapy as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
The above is an embodiment of the present invention. The embodiments and specific parameters in the embodiments are only used for clearly illustrating the verification process of the invention and are not used for limiting the patent protection scope of the invention, which is defined by the claims, and all the equivalent structural changes made by using the contents of the description and the drawings of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method for automatically delineating a radiotherapy target area of nasal cavity NKT cell lymphoma is characterized by comprising the following steps:
acquiring a delineated CT image of the NKT cell lymphoma of the nasal cavity;
preprocessing the CT image to obtain a deep learning data set;
training the deep learning data set in a deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model;
preprocessing a clinical CT image and inputting the preprocessed clinical CT image into the nasal cavity NKT cell lymphoma delineation model for delineation;
the deep neural network model solves the problem of overfitting existing in the deep neural network model through an adaptive Dropout optimization algorithm, and solving the problem of overfitting existing in the deep neural network model through the adaptive Dropout optimization algorithm comprises the following steps:
constructing an agent module, wherein the agent module comprises a global mean pooling layer, a full connection layer, a nonlinear activation layer and a logistic regression layer;
obtaining the discarding probability of the neurons in the deep neural network model through the agent module;
comparing the discarding probability with a preset importance threshold;
and if the discarding probability is larger than the importance threshold, randomly discarding the neuron according to the discarding probability.
2. The method of claim 1, wherein training the deep learning dataset in a deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model comprises:
dividing the CT images in the deep learning data set into a training set and a verification set;
inputting the CT image of the training set into a deep neural network model to obtain image characteristic data;
training the image characteristic data according to a residual error network to obtain a training model;
inputting the CT images of the verification set into the training model to obtain evaluation parameters for evaluating the delineation effect;
comparing the evaluation parameter with a preset parameter;
if the evaluation parameter is greater than or equal to the preset parameter, the training model is a nasal cavity NKT cell lymphoma delineation model.
3. The method for automatically delineating a target region for nasal cavity NKT cell lymphoma radiotherapy according to claim 2, characterized in that:
if the evaluation parameter is smaller than the preset parameter, adjusting the parameter of the deep neural network model;
the method for adjusting the parameters of the deep neural network model comprises the following steps:
adopting cross entropy as a loss function of the deep neural network model;
and iteratively updating the loss function through a gradient descent algorithm until the loss function is converged to obtain the optimal deep neural network parameters.
4. The method for automatically delineating a target region for nasal cavity NKT cell lymphoma radiotherapy according to claim 2, characterized in that:
the evaluation parameters are obtained by means of a collective similarity metric function DSC.
5. The method for automatically delineating a target region for nasal cavity NKT cell lymphoma radiotherapy according to claim 2, characterized in that:
the residual network employs a resnet50 residual network model.
6. An apparatus for automatically delineating a radiotherapy target area of nasal cavity NKT cell lymphoma, characterized by comprising:
an image acquisition module for acquiring a CT image of delineated nasal NKT cell lymphoma;
the image processing module is used for preprocessing the CT image to obtain a deep learning data set;
the model generation module is used for training the deep learning data set in a deep neural network model to obtain a nasal cavity NKT cell lymphoma delineation model;
the automatic delineation module is used for preprocessing a clinical CT image and inputting the preprocessed clinical CT image into a nasal cavity NKT cell lymphoma delineation model for delineation;
the deep neural network model solves the problem of overfitting existing in the deep neural network model through an adaptive Dropout optimization algorithm, and solving the problem of overfitting existing in the deep neural network model through the adaptive Dropout optimization algorithm comprises the following steps:
constructing an agent module, wherein the agent module comprises a global mean pooling layer, a full connection layer, a nonlinear activation layer and a logistic regression layer;
obtaining the discarding probability of the neurons in the deep neural network model through the agent module;
comparing the discarding probability with a preset importance threshold;
and if the discarding probability is larger than the importance threshold, randomly discarding the neuron according to the discarding probability.
7. A computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the method for automatically delineating a target region for nasal NKT-cell lymphoma radiotherapy according to any one of claims 1 to 5.
8. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for automatically delineating a target region for nasal NKT-cell lymphoma radiotherapy according to any one of claims 1 to 5.
CN202110562553.6A 2021-05-24 2021-05-24 Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma Active CN113035334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110562553.6A CN113035334B (en) 2021-05-24 2021-05-24 Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma

Publications (2)

Publication Number Publication Date
CN113035334A CN113035334A (en) 2021-06-25
CN113035334B true CN113035334B (en) 2021-10-29

Family

ID=76455750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110562553.6A Active CN113035334B (en) 2021-05-24 2021-05-24 Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma

Country Status (1)

Country Link
CN (1) CN113035334B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113539402B (en) * 2021-07-14 2022-04-01 广州柏视医疗科技有限公司 Multi-mode image automatic sketching model migration method
CN114862881A (en) * 2022-07-11 2022-08-05 四川大学 Cross-modal attention tumor segmentation method and system based on PET-CT

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111542830A (en) * 2017-12-29 2020-08-14 徕卡生物系统成像股份有限公司 Processing histological images using convolutional neural networks to identify tumors
CN112270660A (en) * 2020-09-30 2021-01-26 四川大学 Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014053992A1 (en) * 2012-10-02 2014-04-10 Telefonaktiebolaget L M Ericsson (Publ) Method and system for radio service optimization using active probing over transport networks
CN109977955B (en) * 2019-04-03 2021-11-30 南昌航空大学 Cervical carcinoma pre-lesion identification method based on deep learning
CN110414665A (en) * 2019-05-21 2019-11-05 浙江工业大学 A kind of network representation learning method based on deep neural network
CN112508965B (en) * 2019-12-10 2023-08-22 广州柏视医疗科技有限公司 Automatic outline sketching system for normal organs in medical image
AU2020100710A4 (en) * 2020-05-05 2020-06-11 Chen, Dadu Mr A method for sentiment analysis of film reviews based on deep learning and natural language processing
CN112729825A (en) * 2020-12-18 2021-04-30 中车永济电机有限公司 Method for constructing bearing fault diagnosis model based on convolution cyclic neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111542830A (en) * 2017-12-29 2020-08-14 徕卡生物系统成像股份有限公司 Processing histological images using convolutional neural networks to identify tumors
CN112270660A (en) * 2020-09-30 2021-01-26 四川大学 Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network

Also Published As

Publication number Publication date
CN113035334A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
Men et al. More accurate and efficient segmentation of organs‐at‐risk in radiotherapy with convolutional neural networks cascades
CN110503654B (en) Medical image segmentation method and system based on generation countermeasure network and electronic equipment
US11676281B2 (en) 3-D convolutional neural networks for organ segmentation in medical images for radiotherapy planning
CN112270660B (en) Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network
CN110310287B (en) Automatic organ-at-risk delineation method, equipment and storage medium based on neural network
US20180260951A1 (en) Deep Image-to-Image Recurrent Network with Shape Basis for Automatic Vertebra Labeling in Large-Scale 3D CT Volumes
JP2022538866A (en) System and method for image preprocessing
WO2022001623A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
CN113035334B (en) Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma
CN112270666A (en) Non-small cell lung cancer pathological section identification method based on deep convolutional neural network
Rahman et al. A new method for lung nodule detection using deep neural networks for CT images
CN110472049B (en) Disease screening text classification method, computer device and readable storage medium
US11430123B2 (en) Sampling latent variables to generate multiple segmentations of an image
Peng et al. H-ProSeg: Hybrid ultrasound prostate segmentation based on explainability-guided mathematical model
Seo et al. Deep neural network with consistency regularization of multi-output channels for improved tumor detection and delineation
Ghosal et al. Lung Nodule Classification Using Convolutional Autoencoder and Clustering Augmented Learning Method (CALM).
Ma et al. A combined learning algorithm for prostate segmentation on 3D CT images
CN113706486A (en) Pancreas tumor image segmentation method based on dense connection network migration learning
Wang et al. Superpixel inpainting for self-supervised skin lesion segmentation from dermoscopic images
CN113920137B (en) Lymph node metastasis prediction method, device, equipment and storage medium
Agarwal et al. Weakly supervised lesion co-segmentation on ct scans
Li et al. SAP‐cGAN: Adversarial learning for breast mass segmentation in digital mammogram based on superpixel average pooling
Khaniki et al. Enhancing Pneumonia Detection using Vision Transformer with Dynamic Mapping Re-Attention Mechanism
CN111783796A (en) PET/CT image recognition system based on depth feature fusion
CN114612478B (en) Female pelvic cavity MRI automatic sketching system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant