CN114581474A - Automatic clinical target area delineation method based on cervical cancer CT image - Google Patents

Automatic clinical target area delineation method based on cervical cancer CT image Download PDF

Info

Publication number
CN114581474A
Authority
CN
China
Prior art keywords
image
loss function
improved
net network
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210458685.9A
Other languages
Chinese (zh)
Inventor
陶振超
樊一展
陈欢欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202210458685.9A priority Critical patent/CN114581474A/en
Publication of CN114581474A publication Critical patent/CN114581474A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The invention relates to the technical field of medical image processing and discloses a method for automatically delineating the clinical target volume in cervical cancer CT images, which comprises the following steps: preprocessing the CT images in the test data set; performing binary classification on each pixel of the CT image through a pre-trained improved U-net network to obtain a binary label for each pixel of the CT image; and drawing the contour line of the clinical target volume of the CT image according to the binary labels. With the improved U-net network and loss function, the invention achieves good performance on a cervical cancer CT scan image data set.

Description

Automatic clinical target area delineation method based on cervical cancer CT image
Technical Field
The invention relates to the technical field of medical image processing, in particular to an automatic delineation method of a clinical target area based on a cervical cancer CT image.
Background
Cervical cancer is a common gynecological cancer.
In modern medicine, cervical cancer is treated mainly with radiotherapy, in which a large number of radioactive particles irradiate a target volume delineated by a professional oncologist. Besides the tumor itself, radiotherapy also covers its lymphatic drainage area, i.e. the clinical target volume (CTV). Accurate segmentation of the CTV region in computed tomography (CT) images is therefore of great significance for the treatment of cervical cancer.
In the field of image segmentation, conventional methods based on classical image-processing techniques generally fall short in accuracy and efficiency, and deep-learning-based methods also face challenges in our task. For example, the cervical cancer CTV is usually small, its boundary is unclear, and it is difficult to identify; data annotation is time-consuming and involves factors such as patient privacy, so the training data are limited in scale; and the data sets are unevenly distributed. The following problems also exist:
1. Image recognition with neural networks usually requires a large amount of labeled data as training samples and a long training time, while for medical images it is often difficult to collect large amounts of labeled data for training.
2. The classic U-net structure is suitable for recognizing objects with relatively clear boundaries, but segments images with indistinct target boundaries relatively poorly; normal tissues and organs in the pelvic cavity, such as the intestines, bladder, cervix, uterine adnexa and blood vessels, further complicate the determination of the target boundary.
3. Most other common image segmentation models use multi-stage pipelines, with high model complexity, long training time, complex data-set structure and a certain migration cost; a neural network that is lightweight, fast to train, data-efficient and effective is therefore needed.
Disclosure of Invention
In order to solve the technical problems, the invention provides an automatic clinical target area delineation method based on cervical carcinoma CT images.
In order to solve the technical problems, the invention adopts the following technical scheme:
A method for automatically delineating the clinical target volume based on cervical cancer CT images comprises the following steps:
Step one: preprocess the CT images in the test data set;
Step two: perform binary classification on each pixel of the CT image through a pre-trained improved U-net network to obtain a binary label for each pixel of the CT image;
Step three: draw the contour line of the clinical target volume of the CT image according to the binary labels;
The improved U-net network comprises a down-sampling path and an up-sampling path, and its training process is as follows:
the down-sampling path comprises four encoders, each encoder consisting of two convolution layers, each followed by a batch normalization layer and a ReLU activation function, with pooling down-sampling performed after each encoder;
the up-sampling path comprises four decoders, each decoder consisting of two convolution layers, each followed by a batch normalization layer and a ReLU activation function, with bilinear up-sampling and a convolution operation performed after each decoder;
feature maps with the same channel dimensionality in the down-sampling path and the up-sampling path are linked by skip connections, and the skip connections are placed after the pooling down-sampling of each encoder;
The CT images in the training data set are preprocessed and input into the improved U-net network to obtain a feature map from which each pixel of the CT image can be classified into two classes. The loss function used during training is the weighted sum of a first loss function and a second loss function; the first loss function measures the similarity between the prediction probability and the real label, and the second loss function corrects the weights of the two classes.
Loss function one: (given as a formula image in the original publication).
Loss function two: (given as a formula image in the original publication). Since the target volume usually occupies only a small portion of the image, this loss function focuses effectively on the target region and thus avoids the tendency of conventional loss functions to overlook tiny regions.
Here, for each pixel i, p_i denotes its prediction probability and y_i its real label; ε is the smoothing parameter, N is the number of pixels of the CT image, α and β are weight terms, and γ is the weight control factor.
In the improved U-net network, 1 × 1 padding is applied to the feature map at each convolution, while the initial CT image input into the improved U-net network is not padded.
Specifically, when the CT images in the test data set and the training data set are preprocessed, an affine transformation is applied to the gray values of each CT image to convert them into Hounsfield unit values, the contrast of the CT image is then improved through a window-level transform, and finally center cropping is performed.
Compared with the prior art, the invention has the following beneficial technical effects:
With the improved U-net network and loss function, the invention achieves good performance on a cervical cancer CT scan image data set. Specifically, the invention handles the boundary of the target region finely; because it is very lightweight and trains quickly, it adapts well to cases where the data set is small; and the loss function performs well in small-target detection by adjusting the class weights.
Drawings
FIG. 1 is a block diagram of an improved U-net network of the present invention;
fig. 2 is a flow chart of the construction of the improved U-net network of the present invention.
Detailed Description
A preferred embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
In the automatic clinical-target-volume delineation method of this embodiment, the CT image is converted to the window width and window level best suited to the pelvic region, giving the best computer-vision rendering of the CT image; the U-net structure is modified by applying 1 × 1 padding at each convolution so that the output feature map of every convolution layer has the same size as its input; and a new weighted loss function replaces the cross-entropy loss of the existing U-net structure, effectively correcting the weights of the different classes. The method can reduce the time radiotherapists spend delineating the cervical cancer clinical target volume and improve delineation consistency.
The method specifically comprises the following steps:
s1: and preprocessing the CT image in the test data set.
The preprocessing of the test data set and the training data set is: performing affine transformation on the gray value of each CT image, and converting the gray value into a Hounsfield Unit value (Hu value), wherein the slope and the intercept can be read from the metadata of the original dicom file; the CT images in the test data set and the training data set come from the pelvic region; considering the low contrast of the CT image, the contrast of the CT image is improved through window level conversion, because the CT image comes from the pelvis region, the window width and the window level select the optimal value corresponding to the pelvis region, the window level is adjusted to be 45, and the window width is 280, so as to obtain the optimal machine vision effect of the CT image; finally, center cutting of 256 pixels x 256 pixels is carried out, and data of the region of interest is obtained.
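For illustration, the preprocessing described above can be sketched in Python as follows. The function name, the use of pydicom, and the rescaling to [0, 1] after windowing are assumptions made only for this sketch; the window level/width of 45/280 and the 256 × 256 center crop follow the description.

```python
import numpy as np
import pydicom


def preprocess_ct_slice(dicom_path, window_level=45, window_width=280, crop=256):
    ds = pydicom.dcmread(dicom_path)
    raw = ds.pixel_array.astype(np.float32)

    # Affine transform of the stored gray values into Hounsfield units,
    # using the slope and intercept read from the DICOM metadata.
    hu = raw * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

    # Window-level transform to raise the contrast of the pelvic region,
    # then rescale to [0, 1] (the rescaling is an illustrative choice).
    low = window_level - window_width / 2.0
    high = window_level + window_width / 2.0
    windowed = (np.clip(hu, low, high) - low) / (high - low)

    # Center crop to the 256 x 256 region of interest.
    h, w = windowed.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    return windowed[top:top + crop, left:left + crop]
```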
S2: and performing secondary classification on each pixel point in the CT image through the pre-trained improved U-net network to obtain a secondary classification label of each pixel point of the CT image.
The U-net network can use effective marking data more effectively by means of data enhancement from a few training images. However, since the target region of the CT image is usually located in the middle of the image rather than near the edge, the U-net network in the prior art is difficult to be applied in the present invention, and the effect of delineating the clinical target region is not good. Therefore, the invention modifies the U-net network to obtain an improved U-net network.
The improved U-net network comprises a down-sampling path and an up-sampling path, and its training process is as follows:
The down-sampling path comprises four encoders; each encoder consists of two convolution layers, each followed by a batch normalization layer and a ReLU activation function, and pooling down-sampling is performed after each encoder.
The up-sampling path comprises four decoders; each decoder consists of two convolution layers, each followed by a batch normalization layer and a ReLU activation function, and each decoder is followed by bilinear up-sampling and a convolution operation. In other words, the transposed-convolution up-sampling used by existing U-net decoders is replaced by the combination of bilinear up-sampling and convolution, which avoids the checkerboard effect.
The encoder and decoder are connected by a structure called a transition block, which handles the connection between them and consists mainly of two 3 × 3 convolution structures.
Feature maps with the same channel dimensionality in the down-sampling path and the up-sampling path are linked by skip connections. The skip connections are placed after the pooling down-sampling of each encoder so as to capture deeper information from the original image, whereas in existing U-net networks the skip connection is placed before the pooling down-sampling.
The CT images in the training data set are preprocessed and input into the improved U-net network to obtain a feature map from which each pixel of the CT image can be classified into two classes.
Instead of padding the input CT image, 1 × 1 padding is applied at each convolution in the improved U-net network, which ensures that the output feature map of each layer has the same size as its input.
The improved U-net network finally outputs two feature maps from which the probability that each pixel of the CT image belongs to the clinical target volume is obtained; binary labeling of these probabilities yields the binary label data.
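A minimal PyTorch sketch of the network just described is given below. The channel widths, the use of max pooling for the pooling down-sampling, and the exact way the post-pooling skip features are fused in each decoder are assumptions introduced only to make the sketch runnable; the patent does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with padding=1 (the "1 x 1 padding"), each followed by BN and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class ImprovedUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, widths=(64, 128, 256, 512)):
        super().__init__()
        self.pool = nn.MaxPool2d(2)                    # pooling type is an assumption
        self.encoders = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.encoders.append(conv_block(ch, w))
            ch = w
        # Transition block between encoder and decoder: two 3x3 convolutions.
        self.transition = conv_block(widths[-1], widths[-1])
        self.decoders = nn.ModuleList()
        self.up_convs = nn.ModuleList()
        rev = list(reversed(widths))                   # [512, 256, 128, 64]
        nxt = rev[1:] + [widths[0]]                    # [256, 128, 64, 64]
        for w, n in zip(rev, nxt):
            # Decoder: fuse the skip feature (same channel count), apply two convolutions,
            # then bilinear upsampling followed by a convolution (no transposed convolution).
            self.decoders.append(conv_block(2 * w, n))
            self.up_convs.append(nn.Conv2d(n, n, 3, padding=1))
        self.head = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = self.pool(enc(x))                      # skip connections are taken AFTER pooling
            skips.append(x)
        x = self.transition(x)
        for dec, up, skip in zip(self.decoders, self.up_convs, reversed(skips)):
            x = dec(torch.cat([x, skip], dim=1))
            x = up(F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False))
        return self.head(x)                            # two output maps: background / clinical target volume
```

For a 256 × 256 input slice this produces a 2 × 256 × 256 output, from which the per-pixel binary labels can be taken as the argmax over the two channels.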
S3: and drawing the contour line of the clinical target area of the CT image according to the two-classification label data.
The loss function used during training is the weighted sum of a first loss function and a second loss function, whose weights can be set as required. The first loss function measures the similarity between the prediction probability and the real label, and the second loss function uses a power function to control the weight of regions with larger prediction probability.
Loss function one: (given as a formula image in the original publication).
Loss function two: (given as a formula image in the original publication).
Here, for each pixel i, p_i denotes its prediction probability and y_i its real label; ε is the smoothing parameter, N is the number of pixels of the CT image, α and β are weight terms, and γ is the weight control factor. The real label, i.e. the ground truth, takes the value 0 or 1: a pixel is labeled 1 if it belongs to the target region and 0 otherwise.
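Because the two formulas are reproduced only as images in the publication, the sketch below is an interpretation rather than the patented formula: it assumes a Dice-style similarity term for loss one and a focal-style class-weighted term for loss two, using the symbols defined above; the default values of α, β, γ and the loss weights are illustrative.

```python
import torch


def dice_loss(pred, target, eps=1.0):
    # Loss one (assumed form): similarity between prediction probability and real label,
    # with a smoothing parameter eps.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def weighted_focal_loss(pred, target, alpha=0.75, beta=0.25, gamma=2.0):
    # Loss two (assumed form): power-function weighting that down-weights confidently
    # classified pixels so the small target region dominates the gradient.
    pred = pred.clamp(1e-6, 1.0 - 1e-6)
    pos = -alpha * (1.0 - pred) ** gamma * target * torch.log(pred)
    neg = -beta * pred ** gamma * (1.0 - target) * torch.log(1.0 - pred)
    return (pos + neg).mean()


def total_loss(pred, target, w1=1.0, w2=1.0):
    # Weighted sum of the two losses, as in the training procedure described above.
    return w1 * dice_loss(pred, target) + w2 * weighted_focal_loss(pred, target)
```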
To address the problems the invention aims to solve, the U-net network and its training process are modified as follows: 1 × 1 padding is applied at each convolution in the network so that the output feature map of each layer has the same size as its input, which copes with the fact that the cervical cancer CTV target region lies in the middle of the image rather than near its edge; the position of the skip connection is adjusted and placed after the down-sampling operation of each layer in the encoder, so as to capture deeper information from the original image; the transposed-convolution operation used by the decoder is replaced by the combination of bilinear up-sampling and convolution, which avoids the checkerboard effect; and a new loss function replaces the cross-entropy loss and effectively corrects the weights of the different classes, giving a good training speed. The method balances indexes such as accuracy and sensitivity, and even small or scattered regions can be delineated accurately. In tests on a cervical cancer radiotherapy target data set, the method is almost identical to the traditional U-net model in accuracy and no different in training time, but its segmentation boundaries are clearer and smoother, and it has clear advantages in indexes such as recall, Dice and IoU.
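As an illustration of how the metrics named above (recall, Dice, IoU) can be computed from a predicted mask and the ground truth, a short helper is sketched below; the smoothing constant is an illustrative choice.

```python
import numpy as np


def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps=1e-8):
    # pred and gt are binary (H, W) masks: 1 inside the clinical target volume, 0 outside.
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "recall": tp / (tp + fn + eps),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
    }
```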
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not intended to be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description is written in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only. Those skilled in the art should treat the description as a whole, and the technical solutions of the embodiments may be combined as appropriate to form other embodiments understandable to those skilled in the art.

Claims (2)

1. A method for automatically delineating the clinical target volume based on cervical cancer CT images, comprising the following steps:
Step one: preprocess the CT images in the test data set;
Step two: perform binary classification on each pixel of the CT image through a pre-trained improved U-net network to obtain a binary label for each pixel of the CT image;
Step three: draw the contour line of the clinical target volume of the CT image according to the binary labels;
wherein the improved U-net network comprises a down-sampling path and an up-sampling path, and its training process is as follows:
the down-sampling path comprises four encoders, each encoder consisting of two convolution layers, each followed by a batch normalization layer and a ReLU activation function, with pooling down-sampling performed after each encoder;
the up-sampling path comprises four decoders, each decoder consisting of two convolution layers, each followed by a batch normalization layer and a ReLU activation function, with bilinear up-sampling and a convolution operation performed after each decoder;
feature maps with the same channel dimensionality in the down-sampling path and the up-sampling path are linked by skip connections, and the skip connections are placed after the pooling down-sampling of each encoder;
the CT images in the training data set are preprocessed and input into the improved U-net network to obtain a feature map from which each pixel of the CT image can be classified into two classes; the loss function used during training is the weighted sum of a first loss function and a second loss function; the first loss function measures the similarity between the prediction probability and the real label, and the second loss function corrects the weights of the two classes;
loss function one: (given as a formula image in the original publication);
loss function two: (given as a formula image in the original publication);
where, for each pixel i, p_i denotes its prediction probability and y_i its real label, ε is the smoothing parameter, N is the number of pixels of the CT image, α and β are weight terms, and γ is the weight control factor;
and 1 × 1 padding is applied to the feature map obtained after each convolution in the improved U-net network, while the initial CT image input into the improved U-net network is not padded.
2. The method for automatically delineating the clinical target volume based on cervical cancer CT images according to claim 1, wherein, when the CT images in the test data set and the training data set are preprocessed, an affine transformation is applied to the gray values of each CT image to convert them into Hounsfield unit values, the contrast of the CT images is then improved through a window-level transform, and finally center cropping is performed.
CN202210458685.9A 2022-04-28 2022-04-28 Automatic clinical target area delineation method based on cervical cancer CT image Pending CN114581474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210458685.9A CN114581474A (en) 2022-04-28 2022-04-28 Automatic clinical target area delineation method based on cervical cancer CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210458685.9A CN114581474A (en) 2022-04-28 2022-04-28 Automatic clinical target area delineation method based on cervical cancer CT image

Publications (1)

Publication Number Publication Date
CN114581474A true CN114581474A (en) 2022-06-03

Family

ID=81784920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210458685.9A Pending CN114581474A (en) 2022-04-28 2022-04-28 Automatic clinical target area delineation method based on cervical cancer CT image

Country Status (1)

Country Link
CN (1) CN114581474A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409837A (en) * 2022-11-01 2022-11-29 北京大学第三医院(北京大学第三临床医学院) Endometrial cancer CTV automatic delineation method based on multi-modal CT image
CN115631178A (en) * 2022-11-03 2023-01-20 昆山润石智能科技有限公司 Automatic wafer defect detection method, system, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211140A (en) * 2019-06-14 2019-09-06 重庆大学 Abdominal vascular dividing method based on 3D residual error U-Net and Weighted Loss Function
CN112233117A (en) * 2020-12-14 2021-01-15 浙江卡易智慧医疗科技有限公司 COVID-19 pneumonia CT detection, recognition and positioning system and computing device
CN114219943A (en) * 2021-11-24 2022-03-22 华南理工大学 CT image organ-at-risk segmentation system based on deep learning
CN114240962A (en) * 2021-11-23 2022-03-25 湖南科技大学 CT image liver tumor region automatic segmentation method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211140A (en) * 2019-06-14 2019-09-06 重庆大学 Abdominal vascular dividing method based on 3D residual error U-Net and Weighted Loss Function
CN112233117A (en) * 2020-12-14 2021-01-15 浙江卡易智慧医疗科技有限公司 COVID-19 pneumonia CT detection, recognition and positioning system and computing device
CN114240962A (en) * 2021-11-23 2022-03-25 湖南科技大学 CT image liver tumor region automatic segmentation method based on deep learning
CN114219943A (en) * 2021-11-24 2022-03-22 华南理工大学 CT image organ-at-risk segmentation system based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIANG DEPENG: "What is the difference between deconvolution and upsampling + convolution?", HTTPS://WWW.ZHIHU.COM/QUESTION/328891283 *
DONG HONGYI: "Deep Learning with PyTorch: Object Detection in Practice [M]", 31 January 2020 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409837A (en) * 2022-11-01 2022-11-29 北京大学第三医院(北京大学第三临床医学院) Endometrial cancer CTV automatic delineation method based on multi-modal CT image
CN115631178A (en) * 2022-11-03 2023-01-20 昆山润石智能科技有限公司 Automatic wafer defect detection method, system, equipment and storage medium
CN115631178B (en) * 2022-11-03 2023-11-10 昆山润石智能科技有限公司 Automatic wafer defect detection method, system, equipment and storage medium

Similar Documents

Publication Publication Date Title
Almajalid et al. Development of a deep-learning-based method for breast ultrasound image segmentation
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112270660B (en) Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network
CN112102321B (en) Focal image segmentation method and system based on depth convolution neural network
WO2023221954A1 (en) Pancreatic tumor image segmentation method and system based on reinforcement learning and attention
CN110321920A (en) Image classification method, device, computer readable storage medium and computer equipment
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
CN109363698A (en) A kind of method and device of breast image sign identification
CN113674253A (en) Rectal cancer CT image automatic segmentation method based on U-transducer
CN110689525A (en) Method and device for recognizing lymph nodes based on neural network
CN109363699A (en) A kind of method and device of breast image lesion identification
CN110751636A (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
CN109363697A (en) A kind of method and device of breast image lesion identification
CN114998265A (en) Liver tumor segmentation method based on improved U-Net
CN115471470A (en) Esophageal cancer CT image segmentation method
CN112750137A (en) Liver tumor segmentation method and system based on deep learning
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
CN114022491B (en) Small data set esophageal cancer target area image automatic delineation method based on improved spatial pyramid model
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
CN112862783A (en) Thyroid CT image nodule automatic diagnosis system based on neural network
CN111062909A (en) Method and equipment for judging benign and malignant breast tumor
CN113379691B (en) Breast lesion deep learning segmentation method based on prior guidance
CN112967295B (en) Image processing method and system based on residual network and attention mechanism
CN115937423A (en) Three-dimensional intelligent reconstruction method for liver tumor medical image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220603

RJ01 Rejection of invention patent application after publication