CN112950615B - Thyroid nodule invasiveness prediction method based on deep learning segmentation network - Google Patents


Info

Publication number
CN112950615B
CN112950615B (application CN202110307664.2A)
Authority
CN
China
Prior art keywords
nodule
network
thyroid
wavelet
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110307664.2A
Other languages
Chinese (zh)
Other versions
CN112950615A (en)
Inventor
郑志强
王雨禾
翁智
Current Assignee
Inner Mongolia University
Original Assignee
Inner Mongolia University
Priority date
Filing date
Publication date
Application filed by Inner Mongolia University filed Critical Inner Mongolia University
Priority to CN202110307664.2A priority Critical patent/CN112950615B/en
Publication of CN112950615A publication Critical patent/CN112950615A/en
Application granted granted Critical
Publication of CN112950615B publication Critical patent/CN112950615B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image processing, and particularly relates to a thyroid nodule invasiveness prediction method based on a deep learning segmentation network. The method comprises the following steps. S1: preprocess clinically obtained thyroid ultrasound images; S2: construct a main framework based on a deep learning segmentation network; S3: improve the generative adversarial network model within that framework; S4: perform accurate semantic segmentation of thyroid nodules and compute nodule area, aspect ratio, and contour regularity; S5: obtain a new, cropped image data set containing only nodules; S6: improve the nonlinear expression capability of the classification network model; S7: classify with the improved classification network model, and train and update that model. The method achieves end-to-end automatic auxiliary diagnosis and overcomes the insufficient accuracy and low detection rate of traditional detection methods.

Description

Thyroid nodule invasiveness prediction method based on deep learning segmentation network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a thyroid nodule invasiveness prediction method based on a deep learning segmentation network.
Background
The thyroid is the largest endocrine gland in the human body. Ultrasound examination can qualitatively and quantitatively estimate its size, volume and blood flow, and can give qualitative or semi-quantitative diagnoses of benign and malignant tumors, which is why ultrasound has become the preferred imaging examination for thyroid disease. Traditionally, thyroid ultrasound results are judged, and a conclusion predicted, by a physician based on experience. With the introduction of image recognition technology, various classification-based detection systems can replace manual work in processing and predicting from ultrasound image data, greatly improving detection efficiency. Combining image recognition with clinical experience, the computer gives a preliminary prediction that the physician then reviews, reducing both the misjudgment rate and the physician's workload.
However, existing classification-based detection systems still have shortcomings when recognizing thyroid ultrasound images. Ultrasound images are mainly grayscale and carry the position and morphological information of thyroid nodules, but clinically obtained ultrasound thyroid images are of poor quality: severe speckle noise, blurred nodule edges, discontinuous boundaries, and low contrast. Edge information is concentrated in the high-frequency domain of the image, where a large amount of noise also resides; speckle noise is the main interference degrading ultrasound image quality. These factors make semantic segmentation of thyroid nodules difficult and ultimately affect the accuracy of the predicted conclusion.
Secondly, ultrasound image recognition places high demands on data processing and computation, and existing segmentation-based prediction methods often require manual intervention in the preprocessing stage to improve segmentation, which reduces practicality and affects the final detection rate. For example, Park et al., in "Diagnosis of thyroid nodules: performance of a deep learning convolutional neural network model vs. radiologists" (Scientific Reports), adopt a "high-precision segmentation - multi-feature extraction - nodule classification" pipeline comprising three parts: (1) in preprocessing, the nodule region is placed at the center of the image to strengthen the fully convolutional network's attention to the central region, and nodule segmentation is performed; (2) from the segmentation result, three images containing the nodule with different amounts of background are generated and feature information is extracted; (3) features are extracted from the segmented nodule image and fused with the features from the second step for benign/malignant classification. This network fully incorporates clinical experience and extracts multiple features, effectively ensuring classification accuracy; however, to improve segmentation precision the nodule region is manually centered in the image, forming a semi-automatic system and reducing practicality to a certain extent.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention aims to provide a thyroid nodule invasiveness prediction method based on a deep learning segmentation network, which can overcome the defects of insufficient accuracy and low detection rate of the traditional detection method.
In order to achieve the purpose, the invention provides the following technical scheme:
a thyroid nodule invasiveness prediction method based on a deep learning segmentation network comprises the following steps:
s1: preprocessing a thyroid ultrasound image obtained clinically by adopting a self-adaptive wavelet algorithm, removing image noise and reserving edge information of the image in a high-frequency domain to obtain an original data set;
s2: constructing a main structure framework based on a deep learning segmentation network; the main structure framework comprises a generation countermeasure network model and a classification network model; the generation countermeasure model is a deep learning model based on the generation countermeasure thought and comprises a generator module and a discriminator module; the classification network model adopts a ResNet network as a baseline network;
s3: improving a generation countermeasure network model in the main structure frame; the improvement content is as follows:
s31: replacing the U-Net backbone network in the generator module with a ResNeXt network model, and simultaneously reserving a multi-scale expansion convolution module;
s32: setting a loss function, training a generative impedance network model, and enabling the dynamic game process of a generator module and a discriminator module to reach a Nash equilibrium point;
s4: utilizing a generator module after countermeasure training to carry out accurate semantic segmentation on thyroid nodules, and counting information of nodule area, aspect ratio and contour rule degree in segmentation results;
s5: carrying out binarization processing on the mask output by the generator module, and multiplying the mask by the original image to obtain image information of a nodule area cut out according to the mask, thereby obtaining a new image data set which only contains nodules and is subjected to semantic segmentation and cutting;
s6: based on a classification network model adopting a ResNet network as a baseline network, the nonlinear expression capability of the classification network model is improved, and the improvement content is as follows:
s61: adding an activation function of an algorithm model on the basis of a main network full convolution network;
s62: increasing a channel attention mechanism in a ResNeXt module, and differentiating the weight of each feature map in the channel dimension;
s7: classifying the prediction result of the invasiveness of the thyroid nodules by using the improved classification network model, and training and updating the classification network model, wherein the classification prediction process of the classification network model comprises the following steps:
s71: inputting an original image data set containing environmental information and a new image data set containing knots obtained by generating a confrontation model into a feature extraction network Net1 and a feature extraction network Net2 in the classification network model, respectively;
s72: outputting nodule aspect ratio information through a semantic segmentation result of a generator module;
s73: features extracted by each feature extraction network are spliced through global average pooling, and then spliced with information of aspect ratio, nodule area and contour rule coefficient extracted by a generation countermeasure network, and further input into a full-connection layer for classification, and a final thyroid nodule invasiveness prediction conclusion is given; the thyroid nodule invasiveness prediction conclusion comprises three categories, namely malignant invasion, malignant non-invasion and benign nodule;
s74: and training the classification network model by adopting a dynamic learning rate and early stopping method, verifying the accuracy of the training model, and storing the model with the highest accuracy in the verification set as the final classification network model.
Further, the design of the adaptive wavelet algorithm used to preprocess the ultrasound thyroid image in step S1 comprises the following steps:
S11: the conventional wavelet threshold function is set as:

δ = σ · √(2 · ln M)

in the above formula, δ is the threshold, M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer, and σ is the standard deviation of the wavelet-domain noise;
s12: designing transform functions of wavelet coefficients
Figure BDA0002988179780000032
Such that when the absolute value of a wavelet coefficient w is less than or equal to a wavelet threshold δ, the coefficient is zeroed out; when the absolute value of the wavelet coefficient w is larger than delta, the wavelet coefficient is reduced to achieve the soft threshold denoising effect, and the transform function of the wavelet coefficient
Figure BDA0002988179780000033
The expression of (a) is:
Figure BDA0002988179780000034
in the above formula, δ is a threshold value, and w is a wavelet coefficient;
s13: corresponding influence factors are introduced into each decomposition layer, so that a wavelet threshold function is improved into an adaptive threshold function, and the requirement of dynamic filtering is met; wherein, the expression of the improved wavelet threshold function is as follows:
Figure BDA0002988179780000041
in the above formula, δ is the threshold value, ecRepresenting the corresponding influence factor introduced in the c decomposition layer, wherein sigma is the standard deviation of wavelet domain noise; m is the total number of wavelet coefficients in the wavelet domain of the corresponding layer;
s14: the number of wavelet decomposition layers is set to 3 layers, i.e., c ∈ [1,2,3 ].
Further, the objective function of the generative adversarial network model in the main framework is:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]

in the above formula, E(·) denotes the expected value over the indicated distribution, D(·) is the confidence that a picture is real, G(·) is the generated picture, p_data(x) is the real sample distribution, and p_z is the defined low-dimensional noise distribution.
Further, in step S32, the loss function during training of the generative adversarial network model is:

L = (1/m) · Σ_{i=1}^{m} [ log D(x_i) + log(1 − D(G(z_i))) ]

in the above formula, m is the number of samples in a training batch, x_i is real picture data, z_i is a noise variable, D(·) is the confidence that a picture is real, and G(·) is the generated picture.
Further, in step S4, the nodule area, contour regularity and aspect ratio statistics are computed as follows:
s41: masking the binary image with black background obtained by the generator;
s42: accurately extracting the nodule edge in the mask image through a cv2.findContours () function of opencv;
s43: obtaining a mask area S surrounded by the outline from the extracted edge information through a cv2.contourarea () function of opencv;
s44: using a cv2.arcLength () function of opencv to obtain a contour perimeter L; the degree of regularity of the nodule edge is determined by the contour rule coefficient
Figure BDA0002988179780000044
I.e., the larger λ, the more irregular the nodule edge;
s45: and calculating the horizontal external moment of the mask image through a cv2. boundinget () function to obtain the aspect ratio information of the mask.
Further, in step S62, the channel attention mechanism is implemented as follows: after a 3 × 3 convolution block, the feature map enters two branches. In the attention branch, the feature map first undergoes global average pooling, so that every feature map is pooled to size 1 × 1, and then passes through three groups of fully connected layers FC1, FC2 and FC3, whose activation functions are ReLU, SELU and SELU, respectively; FC1 reduces the number of output channels to 1/8 of the original. The result finally enters a fourth fully connected group, FC4, whose output restores the original number of channels. The 1 × 1 value of each channel output by FC4 is passed through a Sigmoid function to give a weight scalar in (0, 1), and multiplying each scalar by the corresponding original feature map realizes the channel attention mechanism.
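The per-channel weighting described above can be sketched in NumPy. This sketch keeps the 1/8 channel reduction but collapses the four fully connected groups into a single squeeze/excite pair with random weights, so it is a simplification for illustration, not the patent's exact layer stack:

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """Squeeze-and-excite style channel attention (simplified sketch of S62):
    global average pool -> FC reduce (C -> C/8) with ReLU -> FC expand
    (C/8 -> C) with sigmoid -> rescale each channel of the feature map."""
    s = fmap.mean(axis=(1, 2))            # global average pooling: one scalar per channel
    z = np.maximum(s @ w1, 0.0)           # reduction FC + ReLU (C -> C/8)
    a = 1.0 / (1.0 + np.exp(-(z @ w2)))   # expansion FC + sigmoid -> weights in (0, 1)
    return fmap * a[:, None, None]        # multiply each channel map by its weight

rng = np.random.default_rng(0)
C = 16
fmap = rng.standard_normal((C, 8, 8))     # toy feature map: 16 channels of 8x8
w1 = rng.standard_normal((C, C // 8))     # reduction weights (C -> C/8)
w2 = rng.standard_normal((C // 8, C))     # expansion weights (C/8 -> C)
out = channel_attention(fmap, w1, w2)
```

Every output channel is the corresponding input channel scaled by one scalar in (0, 1), which is exactly the "differentiated weight per feature map in the channel dimension" of step S62.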
Further, in step S71, the feature extraction network Net1 adopts the ResNet50 base architecture and takes as input the ultrasound images containing only nodules; the feature extraction network Net2 adopts the ResNet101 base architecture and takes as input the original nodule images containing a large amount of background.
Further, the training process of the classification network model in step S74 is as follows: based on the original image data set and the cropped new data set generated by the segmentation network, Net1 and Net2 are first trained separately for classification using ImageNet pre-trained weights, without the aspect ratio, nodule area and contour rule coefficient; then Net1, Net2 and the aspect ratio, nodule area and contour rule coefficient information are combined in the classification network model for joint training.
The invention also provides a thyroid nodule invasiveness prediction system based on a deep learning segmentation network, which applies the above prediction method to predict nodule invasiveness in thyroid ultrasound images and comprises:
the preprocessing module is used for preprocessing the thyroid ultrasound image obtained clinically by adopting the method, eliminating image noise and reserving edge information of the image in a high-frequency domain;
a generation confrontation network module which comprises a generator submodule and a discriminator submodule; the generation confrontation network module performs semantic segmentation on the thyroid ultrasound image by adopting the method to obtain a nodule mask, and then extracts nodule area information, edge information and aspect ratio information; carrying out binarization processing on the mask output by the generator module, and multiplying the mask by the original image to obtain image information of a nodule area cut out according to the mask, thereby obtaining a new image data set which only contains nodules and is subjected to semantic segmentation and cutting;
the classification network module adopts the method and generates a new cut data set generated by the confrontation network model based on the original image data set and information of the aspect ratio, the nodule area and the contour rule coefficient of the segmented nodules; and classifying the prediction result of the invasiveness of the nodules in the thyroid ultrasound image to obtain a prediction conclusion of the invasiveness of the nodules.
The invention also provides a thyroid nodule invasiveness prediction terminal based on the deep learning segmentation network, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the thyroid nodule invasiveness prediction method based on the deep learning segmentation network.
The thyroid nodule invasiveness prediction method based on the deep learning segmentation network has the following beneficial effects:
the prediction method provided by the invention adopts a Two-stage diagnosis strategy of firstly dividing and then classifying. By using the generation countermeasure thought for reference, the deep learning model is improved by using the strong robustness baseline module, and the training effect of the segmentation network on the thyroid ultrasound image small sample data set is improved through the countermeasure training of the generator and the discriminator.
The method can effectively segment the nodule lesion region in a thyroid ultrasound image, yielding accurate new nodule semantic-segmentation data, and extracts several effective classification features from the accurate segmentation result, including the nodule aspect ratio, nodule area and nodule edge rule coefficient. Finally, nodule malignancy and invasiveness are predicted by a deep parallel classification network improved with a channel attention mechanism, giving higher prediction accuracy and sensitivity.
The image preprocessing method uses wavelet-transform filtering to remove noise while retaining the high-frequency edge information of the ultrasound image, overcoming the poor quality, severe speckle noise, blurred nodule edges, discontinuous boundaries and low contrast of ultrasound thyroid images, and thereby laying a data foundation for accurate prediction results.
In the method, the whole image is used as input at test time, prediction is guided by the global context information in the image, and acceleration on general-purpose parallel computing architectures is supported, improving detection speed. In the network design, all convolution kernels are 3 × 3 or 1 × 1, following a fewer-channel, deeper-layer design idea that effectively raises the detection rate. This speed-oriented design effectively guarantees the real-time performance of the prediction method in practical applications.
By improving the generative adversarial network, the method raises nodule segmentation accuracy, requires no human intervention on the input ultrasound image, and is a fully automatic and therefore highly practical system.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a thyroid nodule aggressiveness prediction method based on a deep learning segmentation network in embodiment 1;
FIG. 2 is a schematic diagram of the generation of a countermeasure network model in example 1;
FIG. 3 is a comparison of the output mask of a thyroid nodule ultrasound image from a generator module and a real mask in example 1 (the left half of the image is the generator output mask and the right half is the real mask);
FIG. 4 is a comparison of the output mask of the generator module and the real mask of another ultrasound image of thyroid nodule in example 1 (the left half of the figure is the generator output mask and the right half is the real mask);
fig. 5 shows an image in an original data set and a new nodule image cut by semantic segmentation of a thyroid nodule ultrasound image in example 1 (in the figure, the upper half is an image in the original data set, and the lower half is a corresponding image in the new data set after cutting);
fig. 6 shows an image in the original data set and a new image of a thyroid nodule after semantic segmentation and cropping in another ultrasound image of a thyroid nodule in example 1 (the upper half of the image is an image in the original data set, and the lower half of the image is a corresponding image in the new data set after cropping);
fig. 7 shows an image in the original data set and a new image of a thyroid nodule after semantic segmentation and cropping in yet another ultrasound image of a thyroid nodule in example 1 (the upper half of the image is an image in the original data set, and the lower half of the image is a corresponding image in the new data set after cropping);
FIG. 8 is a flowchart of a method for implementing a residual attention model of a channel attention mechanism in example 1;
FIG. 9 is a diagram showing a classification network model in embodiment 1;
fig. 10 is a schematic block diagram of a thyroid nodule aggressiveness prediction system based on a deep learning segmentation network in embodiment 2.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
The embodiment provides a thyroid nodule invasiveness prediction method based on a deep learning segmentation network, and as shown in fig. 1, the method includes the following steps:
s1: and (3) preprocessing a thyroid ultrasound image obtained clinically by adopting a self-adaptive wavelet algorithm, eliminating image noise and reserving edge information of the image in a high-frequency domain to obtain an original data set.
This step addresses the problems in thyroid ultrasound of poor image quality, severe speckle noise, blurred nodule edges, discontinuous boundaries, low contrast, and edge information concentrated in a noisy high-frequency domain. In this embodiment, an adaptive wavelet algorithm preprocesses the image. Wavelet filtering first converts the spatial-domain signal, via the wavelet transform, into a wavelet domain with time-frequency characteristics, then shrinks the noise-mapped wavelet coefficients with a threshold, and finally obtains the denoised image by the inverse transform. Threshold selection is the key to the wavelet filter.
The basic idea of wavelet threshold denoising is that after a signal passes through an N-layer wavelet transform, its important information is carried by the wavelet coefficients, and after decomposition the wavelet coefficients of the signal are large while those of the noise are small; that is, the noise coefficients are smaller than the signal coefficients. In this embodiment, the adaptive wavelet algorithm is designed as follows:
s11: the conventional expression for setting the wavelet threshold function is:
Figure BDA0002988179780000081
in the above formula, δ is the threshold, M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer, and σ is the standard of wavelet domain noise
Tolerance;
s12: designing transform functions of wavelet coefficients
Figure BDA0002988179780000082
Such that when the absolute value of a wavelet coefficient w is less than or equal to a wavelet threshold δ, the coefficient is zeroed out; when the absolute value of the wavelet coefficient w is larger than delta, the wavelet coefficient is reduced to achieve the soft threshold denoising effect, and the transform function of the wavelet coefficient
Figure BDA0002988179780000083
The expression of (a) is:
Figure BDA0002988179780000084
in the above formula, δ is a threshold value, and w is a wavelet coefficient;
s13: corresponding influence factors are introduced into each decomposition layer, so that a wavelet threshold function is improved into an adaptive threshold function, and the requirement of dynamic filtering is met; wherein, the expression of the improved wavelet threshold function is as follows:
Figure BDA0002988179780000091
in the above formula, δ is the threshold value, ecRepresenting the corresponding influence factor introduced in the c decomposition layer, wherein sigma is the standard deviation of wavelet domain noise; m is the total number of wavelet coefficients in the wavelet domain of the corresponding layer;
s14: the number of wavelet decomposition layers is set to 3 layers, i.e., c ∈ [1,2,3 ].
S2: constructing a main structure framework based on a deep learning segmentation network; the Two-stage diagnostic strategy was used in the examples. Firstly, carrying out nodule instance segmentation on the thyroid ultrasound image by utilizing a trained generated confrontation network to obtain a nodule mask, and further extracting nodule area information, edge information, aspect ratio information and the like. Therefore, image data with different sizes and additional medical criterion information can be obtained, and further effective prediction of the invasiveness of the thyroid nodules is achieved through the improved classification network. Thus, the subject structural framework in this embodiment includes generating a countermeasure network model and a classification network model; the generation countermeasure model is a deep learning model based on the generation countermeasure thought and comprises a generator module and a discriminator module; the classification network model adopts a ResNet network as a baseline network;
A generative adversarial network (GAN) is a deep learning model. The model produces strong outputs through the mutual game between two modules in the framework: the generative model and the discriminative model. The objective function of the generative adversarial network is:

min_G max_D V(D, G) = E_{x∼P_data(x)}[log D(x)] + E_{z∼P_z(z)}[log(1 − D(G(z)))]

in the above formula, E(·) represents the expected value over the corresponding distribution, D(·) represents the confidence with which the discriminator judges a picture to be real, G(·) represents the generated picture, P_data(x) represents the true sample distribution, and P_z is the defined low-dimensional noise distribution.
The core idea of the generative adversarial network model in this embodiment is that the discriminator module distinguishes real samples from fake samples, giving a score as close to 1 as possible for real samples and as close to 0 as possible for fake samples, while the generator module tries to fool the discriminator by generating fake data for which the discriminator gives a score as close to 1 as possible. In this embodiment, the thyroid ultrasound image is semantically segmented using this adversarial idea; the specific network model structure is shown in fig. 2.
S3: improving the generative adversarial network model in the main structure framework; the improvements are as follows:
s31: replacing the U-Net backbone network in the generator module with a ResNeXt network model while retaining the multi-scale dilated convolution module; the ResNeXt network has stronger feature extraction capability, so the improved network model achieves higher detection precision with the same number of parameters; retaining the multi-scale dilated convolution module preserves the advantages of dilated convolution in the model.
S32: setting a loss function and training the generative adversarial network model so that the dynamic game between the generator module and the discriminator module reaches a Nash equilibrium point; the generator module and the discriminator module form a dynamic game whose final equilibrium is the Nash equilibrium. For the generator and the discriminator to work properly, the choice of loss function is crucial; in this embodiment, the loss function during training of the generative adversarial network model is:

(1/m) Σ_{i=1}^{m} [log D(x_i) + log(1 − D(G(z_i)))]

in the above formula, m represents the number of samples in a training batch, x_i represents real picture data, z_i represents the noise variable, D(·) represents the confidence of judging a picture, and G(·) represents the generated picture.
During training, the optimization goals are: the generated pictures should be as realistic as possible, and the discriminator should judge as accurately as possible whether a picture is generated.
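The batch loss above can be evaluated directly from the discriminator's confidences (a minimal numpy sketch with dummy scores; real training would compute D(x_i) and D(G(z_i)) with a neural network):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # (1/m) * sum over the batch of log D(x_i) + log(1 - D(G(z_i))).
    # d_real: discriminator scores on real samples, in (0, 1).
    # d_fake: discriminator scores on generated samples, in (0, 1).
    return np.mean(np.log(d_real) + np.log(1.0 - d_fake))
```

The discriminator ascends this quantity (maximum 0, approached when d_real → 1 and d_fake → 0), while the generator descends its own term, driving the game toward the Nash equilibrium.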
In this embodiment, fig. 3 and fig. 4 show, for two different thyroid nodule images, comparisons between the mask output by the generator module after semantic segmentation and the real mask. The comparisons show that the similarity between the generator's output mask and the real mask is extremely high, so the training process in this example can be considered to have reached the optimization goal.
S4: performing accurate semantic segmentation of thyroid nodules with the adversarially trained generator module, and computing nodule area, aspect ratio and contour regularity statistics from the segmentation results;
specifically, the statistical process of the nodule area, the contour rule degree and the aspect ratio in this embodiment includes the following steps:
s41: taking the black-background binary mask image output by the generator;
s42: accurately extracting the nodule edge in the mask image through a cv2.findContours () function of opencv;
s43: obtaining the mask area S enclosed by the contour from the extracted edge information through the cv2.contourArea() function of opencv;
s44: using the cv2.arcLength() function of opencv to obtain the contour perimeter L; the degree of regularity of the nodule edge is measured by the contour rule coefficient

λ = L² / (4πS)

i.e., the larger λ, the more irregular the nodule edge;
s45: computing the horizontal bounding rectangle of the mask image through the cv2.boundingRect() function to obtain the aspect ratio information of the mask.
At this time, the present embodiment has completely acquired the information of the area, aspect ratio and contour rule coefficient of the nodule in the ultrasound image.
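The statistics of S41–S45 can be sketched with plain numpy pixel counting as a simplified stand-in for the OpenCV calls (cv2.contourArea, cv2.arcLength, cv2.boundingRect); the contour rule coefficient is assumed here to take the circularity form λ = L²/(4πS), which matches the stated behavior that λ grows as the edge becomes more irregular:

```python
import numpy as np

def nodule_stats(mask):
    # mask: 2-D 0/1 array, 1 = nodule pixel.
    area = int(mask.sum())                      # stand-in for cv2.contourArea
    # Boundary pixels: nodule pixels with at least one 4-neighbour off
    # (a crude stand-in for the cv2.arcLength contour perimeter).
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]).astype(bool)
    perimeter = int((mask.astype(bool) & ~interior).sum())
    # Bounding box gives the aspect ratio (stand-in for cv2.boundingRect).
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    lam = perimeter ** 2 / (4 * np.pi * area)   # contour rule coefficient (assumed form)
    return area, perimeter, width / height, lam
```

For a 10 × 10 square nodule this yields area 100, perimeter 36, aspect ratio 1.0 and λ slightly above 1, the value a perfectly regular (circular) contour would approach.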
S5: and carrying out binarization processing on the mask output by the generator module, and multiplying the mask by the original image to obtain the image information of the nodule area cut out according to the mask, thereby obtaining a new image data set which only contains the nodules and is subjected to semantic segmentation and cutting.
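The binarize-and-multiply cropping of S5 is a one-line masking operation (a minimal sketch; the 127 binarization threshold for an 8-bit mask is an assumption):

```python
import numpy as np

def crop_by_mask(image, mask):
    # Binarize the generator's mask and multiply it element-wise with the
    # original image, keeping only the nodule region and zeroing the rest.
    binary = (mask > 127).astype(image.dtype)
    return image * binary
```

Running this over every image/mask pair produces the new, background-free data set used by the classification network.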
Fig. 5 to 7 show a comparison between the original data set of the three nodule example images and the clipped new image data set, respectively, where the upper part of the comparison shows the original image containing a large amount of environmental information, and the lower part of the comparison shows only the image information of the nodule.
S6: based on a classification network model adopting a ResNet network as a baseline network, the nonlinear expression capability of the classification network model is improved, and the improvement content is as follows:
s61: adding activation functions to the algorithm model on the basis of the backbone fully convolutional network; this increases the nonlinearity of the model, so that effective feature information in the ultrasound image can be mined deeply from high-dimensional information.
S62: adding a channel attention mechanism in the ResNeXt module to differentiate the weight of each feature map along the channel dimension, thereby increasing the nonlinear expression capability of the network in the channel dimension. Specifically, the channel attention mechanism is realized as follows: as shown in fig. 8, the feature map enters two branches after the 3 × 3 convolution block. In the channel attention branch, the feature map first undergoes global average pooling, so that every feature map is pooled to a size of 1 × 1, and then passes through three fully connected layers FC1, FC2 and FC3, whose activation functions are ReLU, SELU and SELU respectively; FC1 reduces the number of channels to 1/8 of the original. The result finally enters the fourth fully connected layer FC4, whose output expands the number of channels back to the original count. The 1 × 1 feature map corresponding to each channel of the FC4 output is mapped by a Sigmoid function to a weight scalar in (0, 1), and multiplying each scalar by the corresponding original feature map realizes the channel attention mechanism. Through these stacked layers with different activation functions, each channel's feature map receives its own learned weight, realizing weight differentiation across channels and effectively improving the nonlinear expression capability of the network.
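The attention branch just described can be sketched as a plain numpy forward pass. This is a hypothetical illustration: the weight matrices W1–W4 and the C → C/8 → C shape follow the text, but a real implementation would use a deep learning framework with learned parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def selu(x):
    # Standard SELU constants.
    alpha, scale = 1.6732632423543772, 1.0507009873554805
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, W1, W2, W3, W4):
    # feat: (C, H, W). GAP -> FC1 (ReLU, C -> C/8) -> FC2 (SELU)
    # -> FC3 (SELU) -> FC4 (C/8 -> C) -> Sigmoid, then rescale channels.
    z = feat.mean(axis=(1, 2))          # global average pooling: (C,)
    z = relu(W1 @ z)                    # FC1, channel reduction to C/8
    z = selu(W2 @ z)                    # FC2
    z = selu(W3 @ z)                    # FC3
    w = sigmoid(W4 @ z)                 # FC4 + Sigmoid: weights in (0, 1)
    return feat * w[:, None, None]      # per-channel reweighting
```

With all-zero (untrained) weight matrices every channel weight is sigmoid(0) = 0.5, i.e. a uniform attention map; training differentiates the weights across channels.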
S7: classifying the prediction result of the invasiveness of the thyroid nodules by using the improved classification network model, and training and updating the classification network model, wherein the classification prediction process of the classification network model comprises the following steps:
s71: inputting the original image data set containing environmental information and the new nodule-only image data set obtained from the generative adversarial model into the feature extraction networks Net1 and Net2 of the classification network model, respectively; the feature extraction network Net1 adopts a ResNet50 backbone and takes as input the ultrasound images containing only nodules; the feature extraction network Net2 adopts a ResNet101 backbone and takes as input the raw nodule images containing a large amount of background.
S72: calculating the nodule aspect ratio, the nodule area and the contour rule coefficient through the semantic segmentation result of the generator module;
s73: the features extracted by each feature extraction network are pooled by global average pooling and concatenated, then concatenated with the aspect ratio, nodule area and contour rule coefficient information extracted via the generative adversarial network, and fed into a fully connected layer for classification, giving the final thyroid nodule invasiveness prediction; the prediction conclusion covers three categories: malignant invasive, malignant non-invasive and benign nodule.
A schematic diagram of the overall classification network is shown in fig. 9.
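The fusion step of s73 can be sketched as follows (a hypothetical illustration: the pooled feature lengths, the single fully connected layer W, b and the softmax are assumptions standing in for the trained classification head):

```python
import numpy as np

def fuse_and_classify(gap1, gap2, aspect_ratio, area, lam, W, b):
    # Concatenate the two globally average-pooled feature vectors with the
    # three scalar shape statistics, then apply one fully connected layer
    # and a softmax over the three invasiveness classes
    # (malignant invasive / malignant non-invasive / benign nodule).
    x = np.concatenate([gap1, gap2, [aspect_ratio, area, lam]])
    logits = W @ x + b
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()
```

With untrained (zero) parameters the output is the uniform distribution over the three classes; training shapes W and b so the probabilities reflect nodule invasiveness.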
S74: training the classification network model with a dynamic learning rate and early stopping, verifying the accuracy of the trained model, and saving the model with the highest validation-set accuracy as the final classification network model. The training process is as follows: the classification network training uses ImageNet pre-trained weights; based on the original image data set and the cropped new data set produced by the segmentation network, Net1 and Net2 are first trained independently for classification using the ImageNet pre-trained weights, without the aspect ratio information, nodule area or contour rule coefficient; then Net1, Net2, the aspect ratio, the nodule area and the contour rule coefficient are combined into the classification network model for joint training.
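The "keep the best validation model, stop when it no longer improves" logic of S74 can be sketched with a small helper (a hypothetical utility; the patience value and state handling are assumptions, not the patent's exact training loop):

```python
class EarlyStopping:
    # Track the model state with the highest validation accuracy and
    # signal a stop after `patience` epochs without improvement.
    def __init__(self, patience=5):
        self.patience = patience
        self.best_acc = -1.0
        self.best_state = None
        self.bad_epochs = 0

    def step(self, val_acc, state):
        if val_acc > self.best_acc:
            self.best_acc, self.best_state = val_acc, state
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```

Calling `step()` once per epoch keeps `best_state` pointing at the model to save as the final classification network, regardless of when training stops.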
The classification test results of the classification network model are shown in the following table:
table 1: classification test results of the classification network model
(Table 1 is provided as an image in the original document.)
The data in table 1 were analyzed, and in the test experiments, the accuracy of the classification network was on average 86.37%, the specificity was 83.49%, and the sensitivity was 89.5%. Therefore, the prediction method provided by the embodiment has relatively high accuracy and sensitivity, and the specificity of the classification result is also high.
Example 2
As shown in fig. 10, this embodiment provides a thyroid nodule invasiveness prediction system based on a deep learning segmentation network, which uses the prediction method of embodiment 1 to predict the nodule invasiveness conclusion in a thyroid ultrasound image. The system comprises:
a preprocessing module, configured to preprocess a clinically obtained thyroid ultrasound image according to the method in embodiment 1, removing image noise while retaining the image's edge information in the high-frequency domain;
a generative adversarial network module comprising a generator submodule and a discriminator submodule; this module performs semantic segmentation of the thyroid ultrasound image by the method in embodiment 1 to obtain a nodule mask, then extracts nodule area information, edge information and aspect ratio information; it binarizes the mask output by the generator submodule and multiplies it with the original image to obtain the image information of the nodule region cropped according to the mask, thereby producing a new, semantically segmented and cropped image data set containing only nodules;
a classification network module, which adopts the method in embodiment 1 and, based on the original image data set, the new cropped data set generated by the generative adversarial network model, and the aspect ratio, nodule area and contour rule coefficient information of the segmented nodules, classifies the invasiveness of the nodules in the thyroid ultrasound image to obtain the nodule invasiveness prediction conclusion.
Example 3
The embodiment provides a thyroid nodule aggressiveness prediction terminal based on a deep learning segmentation network, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the thyroid nodule aggressiveness prediction method based on the deep learning segmentation network according to the embodiment 1.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A thyroid nodule aggressiveness prediction method based on a deep learning segmentation network is characterized by comprising the following steps:
s1: preprocessing a thyroid ultrasound image obtained clinically by adopting a self-adaptive wavelet algorithm, removing image noise and reserving edge information of the image in a high-frequency domain to obtain an original data set;
s2: constructing a main structure framework based on the deep learning segmentation network; the main structure framework comprises a generative adversarial network model and a classification network model; the generative adversarial model is a deep learning model based on the adversarial idea and comprises a generator module and a discriminator module; the classification network model adopts a ResNet network as the baseline network;
s3: improving a generation countermeasure network model in the main structure frame; the improvement content is as follows:
s31: replacing the U-Net backbone network in the generator module with a ResNeXt network model while retaining the multi-scale dilated convolution module;
s32: setting a loss function and training the generative adversarial network model so that the dynamic game between the generator module and the discriminator module reaches a Nash equilibrium point;
s4: utilizing a generator module after countermeasure training to carry out accurate semantic segmentation on thyroid nodules, and counting information of nodule area, aspect ratio and contour rule degree in segmentation results;
s5: carrying out binarization processing on the mask output by the generator module, and multiplying the mask by the original image to obtain image information of a nodule area cut out according to the mask, thereby obtaining a new image data set which only contains nodules and is subjected to semantic segmentation and cutting;
s6: based on a classification network model adopting a ResNet network as a baseline network, the nonlinear expression capability of the classification network model is improved, and the improvement content is as follows:
s61: adding activation functions to the algorithm model on the basis of the backbone fully convolutional network;
s62: increasing a channel attention mechanism in a ResNeXt module, and differentiating the weight of each feature map in the channel dimension;
s7: classifying the prediction result of the invasiveness of the thyroid nodules by using the improved classification network model, and training and updating the classification network model, wherein the classification prediction process of the classification network model comprises the following steps:
s71: inputting the original image data set containing the environmental information and the new image data set containing only the nodules obtained from the generative adversarial model into the feature extraction network Net1 and the feature extraction network Net2 in the classification network model, respectively;
s72: outputting nodule area, aspect ratio and contour rule coefficient through semantic segmentation result of a generator;
s73: the features extracted by each feature extraction network are pooled by global average pooling and concatenated, then concatenated with the aspect ratio, nodule area and contour rule coefficient information extracted via the generative adversarial network, and fed into a fully connected layer for classification, giving the final thyroid nodule invasiveness prediction; the prediction conclusion covers three categories, namely malignant invasive, malignant non-invasive and benign nodule;
s74: and training the classification network model by adopting a dynamic learning rate and early stopping method, verifying the accuracy of the training model, and storing the model with the highest accuracy in the verification set as the final classification network model.
2. The thyroid nodule aggressiveness prediction method based on the deep learning segmentation network as claimed in claim 1, wherein: the design process of the adaptive wavelet algorithm in the method for preprocessing the ultrasonic thyroid image in the step S1 is as follows:
s11: the conventional expression for the wavelet threshold function is:

δ = σ √(2 ln M)

in the above formula, δ is the threshold, M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer, and σ is the standard deviation of the wavelet-domain noise;
s12: designing a transform function ŵ of the wavelet coefficients such that when the absolute value of a wavelet coefficient w is less than or equal to the wavelet threshold δ, the coefficient is zeroed out, and when the absolute value of w is larger than δ, the coefficient is shrunk toward zero to achieve the soft-threshold denoising effect; the expression of the transform function ŵ is:

ŵ = sign(w)·(|w| − δ), when |w| > δ
ŵ = 0, when |w| ≤ δ

in the above formula, δ is the threshold value and w is the wavelet coefficient;
s13: a corresponding influence factor is introduced for each decomposition layer, so that the wavelet threshold function is improved into an adaptive threshold function meeting the requirement of dynamic filtering; the improved wavelet threshold function is:

δ_c = e_c · σ √(2 ln M)

in the above formula, δ_c is the threshold, e_c represents the corresponding influence factor introduced in the c-th decomposition layer, σ is the standard deviation of the wavelet-domain noise, and M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer;
s14: the number of wavelet decomposition layers is set to 3, i.e., c ∈ {1, 2, 3}.
3. The thyroid nodule aggressiveness prediction method based on the deep learning segmentation network as claimed in claim 2, wherein: the objective function of the generative adversarial network model in the main structure framework is:

min_G max_D V(D, G) = E_{x∼P_data(x)}[log D(x)] + E_{z∼P_z(z)}[log(1 − D(G(z)))]

in the above formula, E(·) represents the expected value over the corresponding distribution, D(·) represents the confidence of judging a picture, G(·) represents the generated picture, P_data(x) represents the true sample distribution, and P_z is the defined low-dimensional noise distribution.
4. The thyroid nodule aggressiveness prediction method based on the deep learning segmentation network as claimed in claim 3, wherein: in step S32, the loss function in the training process of the generative adversarial network model is:

(1/m) Σ_{i=1}^{m} [log D(x_i) + log(1 − D(G(z_i)))]

in the above formula, m represents the number of samples in a training batch, x_i represents real picture data, z_i represents the noise variable, D(·) represents the confidence of judging a picture, and G(·) represents the generated picture.
5. The thyroid nodule aggressiveness prediction method based on the deep learning segmentation network as claimed in claim 1, wherein: in step S4, the statistical process of nodule area, contour rule degree and aspect ratio includes the following steps:
s41: taking the black-background binary mask image output by the generator;
s42: accurately extracting the nodule edge in the mask image through a cv2.findContours () function of opencv;
s43: obtaining the mask area S enclosed by the contour from the extracted edge information through the cv2.contourArea() function of opencv;
s44: using the cv2.arcLength() function of opencv to obtain the contour perimeter L; the degree of regularity of the nodule edge is measured by the contour rule coefficient

λ = L² / (4πS)

i.e., the larger λ, the more irregular the nodule edge;
s45: computing the horizontal bounding rectangle of the mask image through the cv2.boundingRect() function to obtain the aspect ratio information of the mask.
6. The thyroid nodule aggressiveness prediction method based on the deep learning segmentation network as claimed in claim 1, wherein: in step S62, the channel attention mechanism is implemented as follows: the feature map enters two branches after a 3 × 3 convolution block; in the channel attention branch, the feature map first undergoes global average pooling, so that every feature map is pooled to a size of 1 × 1, and then passes through three fully connected layers FC1, FC2 and FC3, whose activation functions are ReLU, SELU and SELU respectively; FC1 reduces the number of channels to 1/8 of the original; the result finally enters the fourth fully connected layer FC4, whose output expands the number of channels back to the original count; the 1 × 1 feature map corresponding to each channel of the FC4 output is mapped by a Sigmoid function to a weight scalar in (0, 1), and multiplying each scalar by the corresponding original feature map realizes the channel attention mechanism.
7. The thyroid nodule aggressiveness prediction method based on the deep learning segmentation network as claimed in claim 1, wherein: in step S71, the feature extraction network Net1 adopts a ResNet50 backbone and takes as input the ultrasound images containing only nodules; the feature extraction network Net2 adopts a ResNet101 backbone and takes as input the raw nodule images containing a large amount of background.
8. The thyroid nodule aggressiveness prediction method based on the deep learning segmentation network as claimed in claim 1, wherein: the training process of the classification network model of step S74 is as follows: based on an original image data set and a cut new data set generated by a segmentation network, firstly, utilizing ImageNet pre-training weights to respectively carry out independent classification training on Net1 and Net2, wherein aspect ratio information, nodule area and contour rule coefficients are not added in the process; and then adding Net1, Net2, aspect ratio information, nodule area and contour rule coefficients into the classification network model for joint training.
9. A thyroid nodule invasiveness prediction system based on a deep learning segmentation network, which is characterized in that the thyroid nodule invasiveness prediction method based on the deep learning segmentation network as claimed in any one of claims 1 to 8 is adopted to realize the prediction of a nodule invasiveness conclusion in a thyroid ultrasound image, and the system comprises:
the preprocessing module is used for preprocessing a thyroid ultrasound image obtained clinically, eliminating image noise and keeping edge information of the image in a high-frequency domain;
a generation confrontation network module which comprises a generator submodule and a discriminator submodule; the generation confrontation network module is used for carrying out semantic segmentation on the nodule examples of the thyroid ultrasound images to obtain nodule masks, and further extracting nodule area information, edge information and aspect ratio information; carrying out binarization processing on the mask output by the generator module, and multiplying the mask by the original image to obtain image information of a nodule area cut out according to the mask, thereby obtaining a new image data set which only contains nodules and is subjected to semantic segmentation and cutting;
the classification network module is used for generating a new cut data set generated by the confrontation network model based on the original image data set and information of the aspect ratio, the nodule area and the contour rule coefficient of the segmented nodules; and classifying the prediction result of the invasiveness of the nodules in the thyroid ultrasound image to obtain a prediction conclusion of the invasiveness of the nodules.
10. A thyroid nodule aggressiveness prediction terminal based on a deep learning segmentation network, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that: the processor executes the thyroid nodule aggressiveness prediction method based on the deep learning segmentation network according to any one of claims 1 to 8.
CN202110307664.2A 2021-03-23 2021-03-23 Thyroid nodule invasiveness prediction method based on deep learning segmentation network Active CN112950615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110307664.2A CN112950615B (en) 2021-03-23 2021-03-23 Thyroid nodule invasiveness prediction method based on deep learning segmentation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110307664.2A CN112950615B (en) 2021-03-23 2021-03-23 Thyroid nodule invasiveness prediction method based on deep learning segmentation network

Publications (2)

Publication Number Publication Date
CN112950615A CN112950615A (en) 2021-06-11
CN112950615B true CN112950615B (en) 2022-03-04

Family

ID=76228057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110307664.2A Active CN112950615B (en) 2021-03-23 2021-03-23 Thyroid nodule invasiveness prediction method based on deep learning segmentation network

Country Status (1)

Country Link
CN (1) CN112950615B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658108A (en) * 2021-07-22 2021-11-16 西南财经大学 Glass defect detection method based on deep learning
CN117333435A (en) * 2023-09-15 2024-01-02 什维新智医疗科技(上海)有限公司 Thyroid nodule boundary definition detection method, thyroid nodule boundary definition detection system, electronic equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472757A (en) * 2018-11-15 2019-03-15 央视国际网络无锡有限公司 It is a kind of that logo method is gone based on the image for generating confrontation neural network
CN110060774A (en) * 2019-04-29 2019-07-26 赵蕾 A kind of thyroid nodule recognition methods based on production confrontation network
CN111291683A (en) * 2020-02-08 2020-06-16 内蒙古大学 Dairy cow individual identification system based on deep learning and identification method thereof
CN111598892A (en) * 2020-04-16 2020-08-28 浙江工业大学 Cell image segmentation method based on Res2-uneXt network structure
CN112085735A (en) * 2020-09-28 2020-12-15 西安交通大学 Aluminum image defect detection method based on self-adaptive anchor frame
CN112150493A (en) * 2020-09-22 2020-12-29 重庆邮电大学 Semantic guidance-based screen area detection method in natural scene
CN112529894A (en) * 2020-12-22 2021-03-19 徐州医科大学 Thyroid nodule diagnosis method based on deep learning network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230294B (en) * 2017-06-14 2020-09-29 北京市商汤科技开发有限公司 Image detection method, image detection device, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472757A (en) * 2018-11-15 2019-03-15 央视国际网络无锡有限公司 It is a kind of that logo method is gone based on the image for generating confrontation neural network
CN110060774A (en) * 2019-04-29 2019-07-26 赵蕾 A kind of thyroid nodule recognition methods based on production confrontation network
CN111291683A (en) * 2020-02-08 2020-06-16 内蒙古大学 Dairy cow individual identification system based on deep learning and identification method thereof
CN111598892A (en) * 2020-04-16 2020-08-28 浙江工业大学 Cell image segmentation method based on Res2-uneXt network structure
CN112150493A (en) * 2020-09-22 2020-12-29 重庆邮电大学 Semantic guidance-based screen area detection method in natural scene
CN112085735A (en) * 2020-09-28 2020-12-15 西安交通大学 Aluminum image defect detection method based on self-adaptive anchor frame
CN112529894A (en) * 2020-12-22 2021-03-19 徐州医科大学 Thyroid nodule diagnosis method based on deep learning network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Fine-Grained Detection of Driver Distraction Based on Neural Architecture Search;Jie Chen等;《IEEE Transactions on Intelligent Transportation Systems》;20210210;第22卷(第9期);5783-5801 *
Generative Adversarial Network Using Multi-modal Guidance for Ultrasound Images Inpainting;Ruiguo Yu等;《ICONIP 2020: Neural Information Processing》;20201119;338-349 *
Fast semantic segmentation based on dense layers and attention mechanism; Cheng Xiaoyue et al.; Computer Engineering; 20190713; vol. 46, no. 4; 247-252+259 *
Detection method for key components of high-voltage power transmission lines based on improved YOLOv3; Weng Zhi et al.; Journal of Computer Applications; 20201231; vol. 40, no. S2; 83-187 *
Application of generative adversarial networks in liver nodule classification; Jin Chuanhao et al.; China Masters' Theses Full-text Database (Medicine & Health Sciences); 20191215; no. 12; E064-26 *

Also Published As

Publication number Publication date
CN112950615A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN112927217B (en) Thyroid nodule invasiveness prediction method based on target detection
CN106875395B (en) Super-pixel-level SAR image change detection method based on deep neural network
CN111553837B (en) Artistic text image generation method based on neural style migration
CN108154519A (en) Dividing method, device and the storage medium of eye fundus image medium vessels
CN112950615B (en) Thyroid nodule invasiveness prediction method based on deep learning segmentation network
CN114266794B (en) Pathological section image cancer region segmentation system based on full convolution neural network
CN112116593A (en) Domain self-adaptive semantic segmentation method based on Gini index
CN113592894B (en) Image segmentation method based on boundary box and co-occurrence feature prediction
CN116030396B (en) Accurate segmentation method for video structured extraction
CN114529516A (en) Pulmonary nodule detection and classification method based on multi-attention and multi-task feature fusion
CN115880495A (en) Ship image target detection method and system under complex environment
CN110852199A (en) Foreground extraction method based on double-frame coding and decoding model
CN113592893A (en) Image foreground segmentation method combining determined main body and refined edge
CN113673396A (en) Spore germination rate calculation method and device and storage medium
CN117392375A (en) Target detection algorithm for tiny objects
CN116778164A (en) Semantic segmentation method for improving deep V < 3+ > network based on multi-scale structure
CN112884773B (en) Target segmentation model based on target attention consistency under background transformation
CN115775226A (en) Transformer-based medical image classification method
CN112070009B (en) Convolutional neural network expression recognition method based on improved LBP operator
CN112215868B (en) Method for removing gesture image background based on generation of countermeasure network
CN117315702B (en) Text detection method, system and medium based on set prediction
CN116486203B (en) Single-target tracking method based on twin network and online template updating
CN116630970A (en) Rapid high-precision cell identification and segmentation method
Xiao Color Texture Image Recognition based on Deep Learning
Khan et al. Recognition Of Hand Gesture Using CNN for American Sign Language

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant