CN112927217A - Thyroid nodule invasiveness prediction method based on target detection - Google Patents

Thyroid nodule invasiveness prediction method based on target detection

Info

Publication number
CN112927217A
CN112927217A (application CN202110307648.3A; granted as CN112927217B)
Authority
CN
China
Prior art keywords
nodule
positioning
network model
network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110307648.3A
Other languages
Chinese (zh)
Other versions
CN112927217B (en)
Inventor
郑志强
陈家瑞
翁智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia Shuke Health Industry Co.,Ltd.
Original Assignee
Inner Mongolia University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University
Priority to CN202110307648.3A
Publication of CN112927217A
Application granted
Publication of CN112927217B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pathology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The invention belongs to the technical field of image processing and specifically relates to a thyroid nodule invasiveness prediction method based on target detection. The method comprises the following steps: S1: preprocessing clinically acquired thyroid ultrasound images to obtain an original data set; S2: constructing a positioning network model based on the conventional Faster RCNN network structure and pre-training it; S3: extracting the nodule shape information from the ultrasound image with the positioning network, obtaining the aspect-ratio information of the nodule and the context information of the surrounding glandular tissue; S4: constructing a classification network model; S5: establishing a multi-model fused thyroid nodule invasiveness prediction network and predicting the invasiveness of thyroid nodules in the ultrasound image; S6: training and updating the classification network model within the fused network model and saving the model with the highest accuracy on the validation set. The method realizes end-to-end, fully automatic computer-aided diagnosis and overcomes the insufficient accuracy and low detection rate of traditional methods.

Description

Thyroid nodule invasiveness prediction method based on target detection
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a thyroid nodule invasiveness prediction method based on target detection.
Background
The thyroid is the largest endocrine gland in the human body. Ultrasound examination can qualitatively and quantitatively estimate the size, volume and blood flow of the gland and can provide qualitative or semi-quantitative diagnosis of benign and malignant lesions, which has made ultrasound the preferred imaging examination for thyroid disease. Traditionally, thyroid ultrasound results are judged by a physician based on experience before a diagnostic conclusion is drawn. With the introduction of image recognition technology, classification-based detection systems can process and predict ultrasound image data in place of manual reading, greatly improving detection efficiency. Combining image recognition with clinical experience, where the computer gives a preliminary prediction that the physician then reviews, reduces both the misdiagnosis rate and the physician's workload.
However, existing classification-based detection systems still have shortcomings when applied to thyroid ultrasound images. For example, existing methods for classifying the nature of thyroid nodules do not take into account both the feature information of the nodule and the context information of the surrounding glandular tissue, which has a non-negligible effect on the reliability of the final prediction. Liu et al. (Medical Image Analysis), in work on automatic detection and classification of thyroid nodules in ultrasound images using clinical-knowledge-guided convolutional neural networks, proposed a ResNet-50-based multi-scale pyramid positioning network that locates nodules, generates three different nodule images and classifies them, but the network introduces little clinical medical experience. An article in the Journal of Image and Graphics on ultrasound image diagnosis of thyroid nodule malignancy fusing deep networks with shallow texture features proposed a benign/malignant thyroid nodule classification algorithm based on fusing deep semantic features and shallow texture features; its drawback is that the lesion region in the ultrasound image must be manually annotated by a physician, so the model has limited practicality and cannot efficiently support computer-aided diagnosis. In addition, ultrasound images are mainly grayscale images that carry the position and shape information of the thyroid nodule. Clinically acquired thyroid ultrasound images are of poor quality, characterized by severe speckle noise, blurred nodule edges, discontinuous boundaries and low contrast. Edge information is concentrated in the high-frequency domain of the image, where a large amount of noise also resides; speckle noise is the main interference affecting ultrasound image quality. These factors make semantic detection of thyroid nodules difficult and ultimately affect the accuracy of the predicted conclusions.
Secondly, ultrasound image recognition and detection place high demands on data processing and computation, and existing detection methods are not specifically designed to support acceleration on parallel computing architectures, so they may suffer from low detection speed. Likewise, existing methods do not specifically design the parameters and depth of the convolution channels, which also limits the final detection rate.
Disclosure of Invention
In view of the problems in the prior art, the present invention aims to provide a thyroid nodule invasiveness prediction method based on target detection that overcomes the insufficient accuracy of traditional detection methods, which struggle to take into account the context information of the nodule and the glandular tissue, and overcomes the inability of traditional algorithms to realize an end-to-end, fully automatic system.
In order to achieve this purpose, the invention provides the following technical scheme:
A thyroid nodule invasiveness prediction method based on target detection comprises the following steps:
S1: preprocessing clinically acquired thyroid ultrasound images with an adaptive wavelet algorithm, removing image noise while preserving the edge information of the image in the high-frequency domain, to obtain an original data set;
S2: constructing a positioning network model based on the conventional Faster RCNN network structure; adding a channel attention mechanism to the ResNet modules of the positioning network model, which increases the nonlinear expression capability of the network in the channel dimension by differentiating the weight of each feature map along that dimension; and pre-training the positioning network model; the construction and training of the positioning network model comprise the following steps:
S21: adopting Faster RCNN as the positioning network and pre-training it with ImageNet transfer learning, wherein the data for local training are ultrasound images containing nodules;
S22: adjusting the model as a whole into a single-task network with only boundary regression, and increasing the weight of the positioning loss during local training;
S23: in the positioning network model, constructing a multi-scale pyramid so that low-level features and high-level features are used to predict the positions of thyroid nodule targets simultaneously at different levels;
S24: in the positioning network model, using three levels of fused ResNet50+FPN features to generate region proposals, and applying a channel attention mechanism in the feature extraction module of the positioning network to ensure the positioning effect;
S25: in the positioning network model, using the feature pyramid for anchor selection and discarding the output features of the third residual block of ResNet50;
S26: training the positioning network on a self-built data set starting from ImageNet pre-training weights, wherein the training data are the corner coordinates of the minimum horizontal bounding rectangle of the ground-truth mask, and the evaluation criterion of the positioning network model is the Dice coefficient between the ground-truth rectangle and the predicted rectangle;
S3: adjusting the network model so that the positioning network extracts the nodule morphology information from the ultrasound image, and obtaining the aspect-ratio information of the nodule from the rectangle coordinates output by the positioning network;
S4: constructing a classification network model that uses ResNet as the baseline network and comprises parallel feature extraction networks Net1 and Net2, and adding a channel attention mechanism to the ResNet modules of the classification network model, which increases the nonlinear expression capability of the network in the channel dimension by differentiating the weight of each feature map along that dimension;
S5: establishing a multi-model fused thyroid nodule invasiveness prediction network, wherein the fused network model predicts the invasiveness of thyroid nodules in the ultrasound image as follows:
S51: through high-precision positioning of the nodule in the ultrasound image by the positioning network model, obtaining two different region images containing the nodule: (1) an image containing only the nodule; (2) an image containing, in addition to the nodule, a large amount of surrounding-tissue information;
S52: inputting the image containing only the nodule and the image containing a large amount of surrounding-tissue information into the parallel feature extraction networks Net1 and Net2, respectively;
S53: outputting the aspect-ratio information of the located nodule from the predicted rectangle in the positioning network model;
S54: splicing the nodule features extracted by Net1 and the context features extracted by Net2 by global average pooling, and then concatenating them with the aspect-ratio information extracted by the positioning network model;
S55: feeding the complete concatenated information into the fully connected layer of the classification network model for classification, obtaining a prediction of malignant-invasive, malignant-non-invasive or benign nodule;
S6: training and updating the classification network model within the fused network model, and saving the model with the highest accuracy on the validation set.
Further, the adaptive wavelet algorithm used to preprocess the thyroid ultrasound image in step S1 is designed as follows:
S11: the conventional wavelet threshold is set as
D = σ·√(2·ln M)
where D is the threshold, M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer, and σ is the standard deviation of the wavelet-domain noise;
S12: a transform function ŵ of the wavelet coefficients is designed such that when the absolute value of the wavelet coefficient w is less than or equal to the wavelet threshold D the coefficient is zeroed out, and when the absolute value of w is larger than D the coefficient is shrunk to achieve the soft-threshold denoising effect; the transform function of the wavelet coefficient is
ŵ = sign(w)·(|w| - D)  for |w| > D,  and  ŵ = 0  for |w| ≤ D
where D is the threshold and w is the wavelet coefficient;
S13: a corresponding influence factor is introduced into each decomposition layer, so that the wavelet threshold becomes an adaptive threshold and meets the requirement of dynamic filtering; the improved wavelet threshold for the c-th decomposition layer takes the form
D_c = e_c·σ·√(2·ln M)
where D_c is the threshold, e_c is the influence factor introduced in the c-th decomposition layer, σ is the standard deviation of the wavelet-domain noise, and M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer.
Further, in step S13, the number of wavelet decomposition layers is set to 3, i.e., c ∈ {1, 2, 3}.
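By way of illustration only, the adaptive soft-threshold denoising described in steps S11 to S13 can be sketched in Python with the PyWavelets library as follows; the function name adaptive_wavelet_denoise, the db4 wavelet, the noise estimate taken from the finest diagonal band, and the per-layer influence factors (simply set to 1.0 here) are assumptions of this sketch rather than choices prescribed by the invention.

import numpy as np
import pywt

def adaptive_wavelet_denoise(image, wavelet="db4", levels=3, factors=(1.0, 1.0, 1.0)):
    """Soft-threshold denoising with a per-layer threshold D_c = e_c * sigma * sqrt(2 * ln M)."""
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    # Robust noise estimate from the finest diagonal detail band (a common convention).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    denoised = [coeffs[0]]                               # the approximation band is kept untouched
    for c, details in enumerate(coeffs[1:], start=1):    # c = 1 is the coarsest detail layer
        shrunk = []
        for band in details:                             # horizontal, vertical and diagonal details
            M = band.size
            D = factors[c - 1] * sigma * np.sqrt(2.0 * np.log(M))
            shrunk.append(pywt.threshold(band, D, mode="soft"))
        denoised.append(tuple(shrunk))
    return pywt.waverec2(denoised, wavelet)

# Example: cleaned = adaptive_wavelet_denoise(ultrasound_gray, levels=3)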
Further, in step S21, the data for local training are generated as follows: when generating the xml files required for local training, only one pair of diagonal coordinates of the horizontal bounding rectangle of the nodule is added, and no category information is added.
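As an illustration of this annotation step, the following sketch writes a Pascal-VOC-style xml file that records only the two diagonal corners of the nodule's bounding rectangle and deliberately omits any class label; the helper name write_nodule_xml and the exact tag layout are assumptions of this sketch, not requirements of the invention.

import xml.etree.ElementTree as ET

def write_nodule_xml(path, image_name, width, height, xmin, ymin, xmax, ymax):
    """Write a minimal VOC-style annotation holding only the box corners, with no class label."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    obj = ET.SubElement(root, "object")        # deliberately no <name> tag: no category information
    box = ET.SubElement(obj, "bndbox")
    for tag, value in (("xmin", xmin), ("ymin", ymin), ("xmax", xmax), ("ymax", ymax)):
        ET.SubElement(box, tag).text = str(value)
    ET.ElementTree(root).write(path)

# Example: write_nodule_xml("case_0001.xml", "case_0001.png", 500, 500, 120, 80, 260, 210)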
Further, in step S22, the positioning loss weight in the network loss function is set to 1, i.e., only the positioning loss function is retained.
Further, the channel attention mechanism in step S2 and step S4 is implemented with a residual attention module inside each block; the residual attention module works as follows:
after being processed by a 3×3 convolution block, the feature maps enter two branches. In the channel attention branch, the feature maps are first passed through global average pooling so that every feature map is pooled to size 1×1, and then through three groups of fully connected layers FC1, FC2 and FC3 whose activation functions are ReLU, SELU and SELU, respectively; FC1 reduces the number of output channels to 1/8 of the original. The features finally enter a fourth fully connected layer FC4, whose output expands the number of channels back to the original count; a Sigmoid function maps the 1×1 feature corresponding to each channel of FC4 to a weight scalar in (0,1), and the channel attention mechanism is realized by multiplying each scalar with the original feature maps.
Further, in step S3, the context information about the glandular tissue acquired by the positioning network covers the nodule and the surrounding tissue region; in the classification network, a parallel two-way network takes as input images containing different amounts of surrounding-tissue information, so that feature extraction is enhanced both for the nodule itself and for the tissue around it. The context information about the glandular tissue acquired by the positioning network comprises three levels: the context within a target, the context between targets, and the relationship between targets and the scene. In the image, these appear as the asymmetric size of thyroid nodules, the fact that the left and right thyroid lobes often appear in pairs, and the relatively fixed size and position of nodules in the isthmus.
Further, in step S6, the training of the classification network model is completed with the PyTorch framework; the classification network model is trained from ImageNet pre-training weights, and the training stage uses a dynamic learning rate and early stopping;
when training the classification network model, Net1 and Net2 are first trained separately for classification, using ImageNet pre-training weights, on the original data set and on the cropped data set generated by the positioning network model, without the aspect-ratio information; Net1, Net2 and the aspect-ratio information are then added to the classification network model for joint training.
The invention also provides a thyroid nodule invasiveness prediction system based on target detection, which uses the above thyroid nodule invasiveness prediction method based on target detection to predict the invasiveness of nodules in thyroid ultrasound images. The system comprises:
a preprocessing module, which preprocesses clinically acquired thyroid ultrasound images, removing image noise while preserving the edge information of the image in the high-frequency domain, to obtain an original data set;
a positioning network module, which accurately locates the thyroid nodule in the images of the original data set to obtain a new image data set containing only the nodule and a new image data set containing the nodule together with surrounding-tissue information, and which also calculates and extracts the nodule aspect-ratio information; and
a classification network module, which works on the cropped data sets generated by the positioning network module from the original image data set and on the aspect-ratio information output by the positioning network module; it considers both the nodule regions and the background tissue regions at different fields of view, fuses and reuses the features at different fields of view through the FPN, and simultaneously uses the nodule localization to obtain the aspect ratio and the context information; it classifies the nodules in the thyroid ultrasound image with respect to invasiveness and outputs a prediction of nodule invasiveness.
The invention also provides a thyroid nodule invasiveness prediction terminal based on target detection, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the thyroid nodule invasiveness prediction method based on target detection.
The thyroid nodule invasiveness prediction method based on target detection provided by the invention has the following beneficial effects:
the prediction method provided by the invention adopts a two-stage diagnosis strategy of 'positioning + classification', improves the existing network architecture, also efficiently utilizes a positioning network to extract additional high-correlation medical criteria, considers the feature information of the nodule and the context information of the glandular tissue, realizes high-precision nodule diagnosis and prediction, and improves the reliability of a prediction conclusion.
The method provided by the invention can accurately locate the thyroid nodule, obtain new data containing only the nodule, and calculate and extract the nodule aspect-ratio information. The network model takes into account both nodule regions and background tissue regions at different fields of view, fuses and reuses the features at different fields of view through the FPN, and uses the nodule localization to obtain the aspect ratio and the context information for accurate classification; it therefore achieves higher prediction accuracy and sensitivity.
The image preprocessing method provided by the invention uses wavelet-transform filtering to remove noise while preserving the edge information in the high-frequency domain of the ultrasound image. It compensates for the poor quality of thyroid ultrasound images, with their severe speckle noise, blurred nodule edges, discontinuous boundaries and low contrast, and thereby lays a data foundation for improving the accuracy of the prediction result.
In the method, the whole image is used as input during testing, so the prediction is guided by the global context information in the image; acceleration on general-purpose parallel computing architectures is supported, which improves detection speed. In the network design, all convolution kernels are 3×3 or 1×1, following the idea of fewer convolution channel parameters and deeper layers, which effectively improves the detection rate. This design bias toward processing speed effectively guarantees the real-time performance of the prediction method in practical applications.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a thyroid nodule aggressiveness prediction method based on target detection in example 1;
FIG. 2 is a structural diagram of the fast RCNN network in embodiment 1;
FIG. 3 is a structural diagram of an FPN network in embodiment 1;
FIG. 4 is a schematic diagram of a positioning network model in embodiment 1;
FIG. 5 is a positioning display diagram of the thyroid nodule image positioning test in the positioning network model pre-training process in example 1;
FIG. 6 is a comparison chart of the localization effect of the thyroid nodule image localization test in the localization network model pre-training process in embodiment 1;
FIG. 7 is a diagram showing a classification network model in embodiment 1;
FIG. 8 is a training curve diagram in the pre-training process of the classification network model in embodiment 1;
FIG. 9 is a statistical table of classification test results in the pre-training process of the classification network model in embodiment 1;
FIG. 10 is a diagram of a residual attention module in example 1;
fig. 11 is a schematic block diagram of a thyroid nodule aggressiveness prediction system based on target detection in example 2.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
As shown in fig. 1, the present embodiment provides a thyroid nodule invasiveness prediction method based on target detection, comprising the following steps S1-S6:
S1: preprocess the clinically acquired thyroid ultrasound images with an adaptive wavelet algorithm, removing image noise while preserving the edge information of the image in the high-frequency domain, to obtain an original data set.
This step addresses the problems encountered in thyroid ultrasound detection: poor image quality, severe speckle noise, blurred nodule edges, discontinuous boundaries, low contrast, and edge information that is concentrated in the high-frequency domain together with heavy noise. In this embodiment an adaptive wavelet algorithm is used to preprocess the images. Wavelet filtering is based on the wavelet transform: the spatial-domain signal is converted into a wavelet domain with time-frequency characteristics, the noise is reduced by thresholding the wavelet coefficients, and the denoised image is then obtained by the inverse transform. Threshold selection is the key to wavelet filtering.
The basic idea of wavelet threshold denoising is that after a signal is decomposed by an N-level wavelet transform, the wavelet coefficients generated by the signal carry its important information: the wavelet coefficients of the signal are relatively large, while those of the noise are relatively small. In this embodiment, the adaptive wavelet algorithm is designed as follows:
S11: the conventional wavelet threshold is set as
D = σ·√(2·ln M)
where D is the threshold, M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer, and σ is the standard deviation of the wavelet-domain noise;
S12: a transform function ŵ of the wavelet coefficients is designed such that when the absolute value of the wavelet coefficient w is less than or equal to the wavelet threshold D the coefficient is zeroed out, and when the absolute value of w is larger than D the coefficient is shrunk to achieve the soft-threshold denoising effect; the transform function of the wavelet coefficient is
ŵ = sign(w)·(|w| - D)  for |w| > D,  and  ŵ = 0  for |w| ≤ D
where D is the threshold and w is the wavelet coefficient;
S13: a corresponding influence factor is introduced into each decomposition layer, so that the wavelet threshold becomes an adaptive threshold and meets the requirement of dynamic filtering; the improved wavelet threshold for the c-th decomposition layer takes the form
D_c = e_c·σ·√(2·ln M)
where D_c is the threshold, e_c is the influence factor introduced in the c-th decomposition layer, σ is the standard deviation of the wavelet-domain noise, and M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer;
S14: the number of wavelet decomposition layers is set to 3, i.e., c ∈ {1, 2, 3}.
S2: constructing a positioning network model based on a network structure of the traditional fast RCNN; increasing a channel attention mechanism in a ResNet module in a positioning network model, and increasing the nonlinear expression capability of the network in the channel dimension by differentiating the weight of each feature map in the channel dimension; pre-training the positioning network model; because the ultrasonic image has the characteristics of noise, fuzziness, less effective information and the like, the method adds the activation function of the algorithm model on the basis that the main body network model adopts fast RCNN, thereby increasing the nonlinearity of the model and achieving the aim of deeply mining the effective characteristic information in the ultrasonic image in high-dimensional information.
The schematic diagram of the positioning network model is shown in fig. 4, and the construction and training of the positioning network model comprise the following steps:
S21: In this embodiment, Faster RCNN is used as the positioning network; its architecture is shown in fig. 2. Because of the small-sample limitation of thyroid nodule ultrasound data, the network is pre-trained with ImageNet transfer learning, and the locally trained data are ultrasound images containing nodules. Specifically, when generating the xml files required for local training, only one pair of diagonal coordinates of the horizontal bounding rectangle of the nodule is added, and no category information is added.
S22: the RPN in the model is still a multi-task network (two-classification and regression), and the multi-task property of the original detection network reduces the network positioning effect, so that the model is a single-task network with boundary regression as a whole, and the weight of positioning loss is increased during local training, thereby pertinently improving the positioning effect of fast RCNN. Meanwhile, the positioning loss weight in the network loss function is 1, i.e., only the positioning loss function is retained in the present embodiment.
S23: in the positioning network model, a scheme of constructing a multi-scale pyramid is adopted, and the positions of thyroid nodule targets are predicted simultaneously in different layers by utilizing low-layer features and high-layer features respectively.
For most deep learning networks, such as VGG, ResNet and Inception, only the features of the last layer are used for classification. This is fast and uses little memory, but it considers only the last layer of the deep network and ignores the feature information of the other layers, even though that information can improve detection accuracy to a certain extent.
This embodiment therefore constructs a multi-scale pyramid; the architecture of the FPN is shown in fig. 3. Low-level and high-level features are used for prediction at different levels simultaneously because one thyroid image may contain several nodule targets of different sizes, and different targets may require different features: simple targets can be detected from shallow features, while complex targets require complex features. In addition, high-level semantic features give low detection accuracy on small targets, i.e., detecting small nodules requires making use of low-level features, which is why the positioning network adopts an FPN structure.
S24: In the ultrasound image, the shallow features contain a large amount of position information, and fusing deep semantic features with shallow texture features through the feature pyramid improves the accuracy of the positioning network's region proposals. Unlike a conventional FPN, in order to reduce feature redundancy, improve computational efficiency and lower computational cost, this embodiment uses three levels of fused ResNet50+FPN features to generate region proposals, which greatly reduces positioning time and cost; at the same time, a channel attention mechanism is applied in the feature extraction module of the positioning network to ensure the positioning effect.
S25: In the positioning network model, the feature pyramid is used for anchor selection. Because the ultrasound images used in practice are small, roughly 500×500, the output features of the third residual block of ResNet50 are discarded in this embodiment to reduce time and computational cost.
S26: The positioning network is trained on a self-built data set, starting from ImageNet pre-training weights. The training data are the corner coordinates of the minimum horizontal bounding rectangle of the ground-truth mask, and the evaluation criterion of the positioning network model is the Dice coefficient between the ground-truth rectangle and the predicted rectangle.
During pre-training, a good result is reached after only 100 epochs. In this embodiment, of the 527 images in the test set, 479 were detected effectively and 48 were not, an accuracy of 91%, and the mean Dice coefficient between the two rectangles was 90.37%. The positioning tests are shown in fig. 5 and fig. 6; fig. 6 shows the overlay of the positioning result (rectangular box) on the ground-truth mask. The results show that the positioning network is highly accurate regardless of the shape and size of the nodule.
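The evaluation criterion mentioned in S26 can be made concrete with the following small sketch, which computes the Dice coefficient between two axis-aligned rectangles given as (xmin, ymin, xmax, ymax); the function name rect_dice and the box format are assumptions of this sketch.

def rect_dice(box_a, box_b):
    """Dice coefficient 2*|A∩B| / (|A| + |B|) between two axis-aligned rectangles."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = max(0.0, ax2 - ax1) * max(0.0, ay2 - ay1)
    area_b = max(0.0, bx2 - bx1) * max(0.0, by2 - by1)
    return 2.0 * inter / (area_a + area_b) if (area_a + area_b) > 0 else 0.0

# Example: rect_dice((120, 80, 260, 210), (125, 85, 255, 215)) is close to 1 for a good prediction.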
S3: adjusting the network model so as to extract the nodule form information in the ultrasonic image by using a positioning network; and obtaining the aspect ratio information of the nodule by using the rectangular coordinate points output by the positioning network, and acquiring context information about the glandular tissues required by local feature detection.
Visual processing problems usually depend heavily on context information, which plays two main roles: resolving uncertainty and ambiguity, and reducing processing time. In this embodiment, the context information about the glandular tissue acquired by the positioning network mainly covers the nodule and the surrounding tissue region; diagnosing the nature of a nodule cannot rely only on the feature information of the nodule itself, and the interaction between the nodule and its surrounding region must also be taken into account. In the classification network, this embodiment therefore uses a parallel two-way network whose inputs are images containing different amounts of surrounding-tissue information, enhancing feature extraction for the tissue around the nodule as well as for the nodule itself, so that the extracted context information improves classification accuracy.
This embodiment uses the efficient positioning network to extract the nodule morphology information. The positioning network is precise, and the coordinates it outputs agree closely with the gold-standard minimum horizontal bounding rectangle. In thyroid nodule diagnosis the aspect ratio is an important reference for physicians, so the aspect-ratio information of the nodule is computed from the rectangle coordinates output by the positioning network and fed into the later classification network, which effectively improves classification accuracy.
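A minimal sketch of how the predicted rectangle could be turned into the two region images used later in step S5 and the aspect-ratio feature is given below; the context margin of 40% and the helper name crops_and_aspect_ratio are illustrative assumptions, not values specified by the invention.

import numpy as np

def crops_and_aspect_ratio(image, box, context_ratio=0.4):
    """Return the nodule-only crop, a crop with surrounding tissue, and the height/width ratio."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    nodule_crop = image[y1:y2, x1:x2]
    # Enlarge the rectangle by a fixed fraction on each side to take in surrounding tissue.
    mx = int(round((x2 - x1) * context_ratio))
    my = int(round((y2 - y1) * context_ratio))
    cx1, cy1 = max(0, x1 - mx), max(0, y1 - my)
    cx2, cy2 = min(w, x2 + mx), min(h, y2 + my)
    context_crop = image[cy1:cy2, cx1:cx2]
    aspect_ratio = (y2 - y1) / max(1, x2 - x1)   # "taller than wide" is the clinically relevant sign
    return nodule_crop, context_crop, aspect_ratio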
S4: constructing a classification network model, wherein the classification network model adopts a ResNet network as a baseline network, and comprises parallel feature extraction networks Net1 and Net 2; meanwhile, a channel attention mechanism is added in a ResNet module in a classification network model, and the nonlinear expression capability of the network in the channel dimension is increased by differentiating the weight of each feature map in the channel dimension; the structure of the classification network model is shown in fig. 7.
S5: establishing a multi-model fused thyroid nodule aggressiveness prediction network, wherein the fused network model has the following prediction processing process on thyroid nodule aggressiveness in an ultrasonic image:
s51: the high-precision positioning of the nodules in the ultrasonic image through the positioning network model obtains two different region images containing the nodules, which are respectively as follows: (1) images containing only nodules; (2) contain a large number of images of environmental information in addition to nodules;
s52: respectively inputting images only containing nodules and images containing a large amount of environmental information besides the nodules into parallel feature extraction networks Net1 and Net 2;
s53: outputting the aspect ratio information of the positioned nodules by using a prediction rectangular box in the positioning network model;
s54: performing feature splicing on the nodule features extracted by the feature extraction networks Net1 and Net2 and the context features in a global average pooling mode, and then splicing the nodule features and the context features with aspect ratio information extracted by a positioning network model;
s55: inputting the complete information spliced in the previous step into a full-connection layer in a classification network model for classification to obtain prediction conclusions of malignant invasion, malignant non-invasion or benign nodules;
s6: and training and updating the classification network model in the converged network model, and storing the model with the highest accuracy in the verification set.
The training of the classification network model is carried out with the PyTorch framework; the classification network model is trained from ImageNet pre-training weights, and the training stage uses a dynamic learning rate and early stopping.
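The dynamic learning rate and early stopping mentioned above could be realized, for example, as in the following PyTorch sketch; the scheduler choice (ReduceLROnPlateau), the patience values, the Adam optimizer, the checkpoint path and the assumption that each batch ends with its label tensor are choices of this sketch rather than settings stated in the embodiment.

import torch

def train_with_early_stopping(model, train_loader, val_loader, epochs=100, patience=10, device="cpu"):
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="max", factor=0.5, patience=3)
    model.to(device)
    best_acc, stale = 0.0, 0
    for epoch in range(epochs):
        model.train()
        for *inputs, labels in train_loader:             # inputs are passed to the model in order
            optimizer.zero_grad()
            logits = model(*[x.to(device) for x in inputs])
            loss = criterion(logits, labels.to(device))
            loss.backward()
            optimizer.step()
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for *inputs, labels in val_loader:
                preds = model(*[x.to(device) for x in inputs]).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.numel()
        acc = correct / max(1, total)
        scheduler.step(acc)                              # dynamic learning rate driven by validation accuracy
        if acc > best_acc:
            best_acc, stale = acc, 0
            torch.save(model.state_dict(), "best_classifier.pth")   # keep the best validation model
        else:
            stale += 1
            if stale >= patience:                        # early stopping
                break
    return best_acc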
When training the classification network model, Net1 and Net2 are first trained separately for classification, using ImageNet pre-training weights, on the original data set and on the cropped data set generated by the positioning network model, without the aspect-ratio information; Net1, Net2 and the aspect-ratio information are then added to the classification network model for joint training. The training data come from a hospital data set of 4021 images; the training, validation and test sets are split in the ratio 6:2:2, which gives a reasonable distribution.
The test results are shown in fig. 9: in the test experiments, the classification network reaches an average accuracy of 84.3%, a specificity of 80.85% and a sensitivity of 87.42%. During fine-tuning, the optimal model weights obtained in the earlier stage are loaded as the pre-training weights of each training run, so the training curves in fig. 8 rise quickly and converge within a small number of epochs.
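The fusion described in steps S51 to S55 can be illustrated with the following PyTorch sketch: two ResNet feature extractors, global average pooling, concatenation with the scalar aspect ratio, and a fully connected classifier with three outputs. It is a simplified stand-in for the embodiment's network; the plain torchvision ResNet18 backbones are an assumption of this sketch, and the channel attention modules are omitted.

import torch
import torch.nn as nn
import torchvision

class FusedNoduleClassifier(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Net1 sees the nodule-only crop, Net2 the crop with surrounding tissue.
        self.net1 = torchvision.models.resnet18(pretrained=True)
        self.net2 = torchvision.models.resnet18(pretrained=True)
        feat_dim = self.net1.fc.in_features
        self.net1.fc = nn.Identity()     # keep the globally average-pooled features
        self.net2.fc = nn.Identity()
        # +1 for the scalar aspect-ratio feature from the positioning network.
        self.classifier = nn.Linear(2 * feat_dim + 1, num_classes)

    def forward(self, nodule_img, context_img, aspect_ratio):
        f1 = self.net1(nodule_img)                       # (B, feat_dim)
        f2 = self.net2(context_img)                      # (B, feat_dim)
        fused = torch.cat([f1, f2, aspect_ratio.unsqueeze(1)], dim=1)
        return self.classifier(fused)                    # logits: invasive / non-invasive / benign

# Example: logits = FusedNoduleClassifier()(torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224), torch.tensor([1.1, 0.8]))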
The channel attention mechanism in step S2 and step S4 is implemented with a residual attention module inside each block; as shown in fig. 10, the residual attention module works as follows:
after being processed by a 3×3 convolution block, the feature maps enter two branches. In the channel attention branch, the feature maps are first passed through global average pooling so that every feature map is pooled to size 1×1, and then through three groups of fully connected layers FC1, FC2 and FC3 whose activation functions are ReLU, SELU and SELU, respectively; FC1 reduces the number of output channels to 1/8 of the original. The features finally enter a fourth fully connected layer FC4, whose output expands the number of channels back to the original count; a Sigmoid function maps the 1×1 feature corresponding to each channel of FC4 to a weight scalar in (0,1), and the channel attention mechanism is realized by multiplying each scalar with the original feature maps.
Example 2
As shown in fig. 11, this embodiment provides a thyroid nodule invasiveness prediction system based on target detection, which uses the thyroid nodule invasiveness prediction method based on target detection of embodiment 1 to predict the invasiveness of nodules in thyroid ultrasound images. The system comprises:
a preprocessing module, which preprocesses clinically acquired thyroid ultrasound images, removing image noise while preserving the edge information of the image in the high-frequency domain, to obtain an original data set;
a positioning network module, which accurately locates the thyroid nodule in the images of the original data set to obtain a new image data set containing only the nodule, and which also calculates and extracts the nodule aspect-ratio information and the context information of the target; and
a classification network module, which works on the cropped data sets generated by the positioning network module from the original image data set and on the aspect-ratio information output by the positioning network module; it considers both the nodule regions and the background tissue regions at different fields of view, fuses and reuses the features at different fields of view through the FPN, and simultaneously uses the nodule localization to obtain the aspect ratio and the context information; it classifies the nodules in the thyroid ultrasound image with respect to invasiveness and outputs a prediction of nodule invasiveness.
Example 3
The present embodiment provides a thyroid nodule aggressiveness prediction terminal based on target detection, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the thyroid nodule aggressiveness prediction method based on target detection as in embodiment 1.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A thyroid nodule aggressiveness prediction method based on target detection, characterized by comprising the following steps:
S1: preprocessing clinically acquired thyroid ultrasound images with an adaptive wavelet algorithm, removing image noise while preserving the edge information of the image in the high-frequency domain, to obtain an original data set;
S2: constructing a positioning network model based on the conventional Faster RCNN network structure; adding a channel attention mechanism to the ResNet modules of the positioning network model, which increases the nonlinear expression capability of the network in the channel dimension by differentiating the weight of each feature map along that dimension; and pre-training the positioning network model; wherein the construction and training of the positioning network model comprise the following steps:
S21: adopting Faster RCNN as the positioning network and pre-training it with ImageNet transfer learning, wherein the data for local training are ultrasound images containing nodules;
S22: adjusting the model as a whole into a single-task network with only boundary regression, and increasing the weight of the positioning loss during local training;
S23: in the positioning network model, constructing a multi-scale pyramid so that low-level features and high-level features are used to predict the positions of thyroid nodule targets simultaneously at different levels;
S24: in the positioning network model, using three levels of fused ResNet50+FPN features to generate region proposals, and applying a channel attention mechanism in the feature extraction module of the positioning network to ensure the positioning effect;
S25: in the positioning network model, using the feature pyramid for anchor selection and discarding the output features of the third residual block of ResNet50;
S26: training the positioning network on a self-built data set starting from ImageNet pre-training weights, wherein the training data are the corner coordinates of the minimum horizontal bounding rectangle of the ground-truth mask, and the evaluation criterion of the positioning network model is the Dice coefficient between the ground-truth rectangle and the predicted rectangle;
S3: adjusting the network model so that the positioning network extracts the nodule morphology information from the ultrasound image, and obtaining the aspect-ratio information of the nodule from the rectangle coordinates output by the positioning network;
S4: constructing a classification network model that uses ResNet as the baseline network and comprises parallel feature extraction networks Net1 and Net2, and adding a channel attention mechanism to the ResNet modules of the classification network model, which increases the nonlinear expression capability of the network in the channel dimension by differentiating the weight of each feature map along that dimension;
S5: establishing a multi-model fused thyroid nodule aggressiveness prediction network, wherein the fused network model predicts the aggressiveness of thyroid nodules in the ultrasound image as follows:
S51: through high-precision positioning of the nodule in the ultrasound image by the positioning network model, obtaining two different region images containing the nodule: (1) an image containing only the nodule; (2) an image containing, in addition to the nodule, a large amount of surrounding tissue;
S52: inputting the image containing only the nodule and the image containing a large amount of tissue information into the parallel feature extraction networks Net1 and Net2, respectively;
S53: outputting the aspect-ratio information of the located nodule from the predicted rectangle in the positioning network model;
S54: splicing the nodule features extracted by Net1 and the context features extracted by Net2 by global average pooling, and then concatenating them with the aspect-ratio information extracted by the positioning network model;
S55: feeding the complete concatenated information into the fully connected layer of the classification network model for classification, obtaining a prediction of malignant-invasive, malignant-non-invasive or benign nodule;
S6: training and updating the classification network model within the fused network model, and saving the model with the highest accuracy on the validation set.
2. The target detection-based thyroid nodule aggressiveness prediction method of claim 1, wherein the adaptive wavelet algorithm used to preprocess the thyroid ultrasound image in step S1 is designed as follows:
S11: the conventional wavelet threshold is set as
D = σ·√(2·ln M)
wherein D is the threshold, M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer, and σ is the standard deviation of the wavelet-domain noise;
S12: a transform function ŵ of the wavelet coefficients is designed such that when the absolute value of the wavelet coefficient w is less than or equal to the wavelet threshold D the coefficient is zeroed out, and when the absolute value of w is larger than D the coefficient is shrunk to achieve the soft-threshold denoising effect, the transform function of the wavelet coefficient being
ŵ = sign(w)·(|w| - D)  for |w| > D,  and  ŵ = 0  for |w| ≤ D
wherein D is the threshold and w is the wavelet coefficient;
S13: a corresponding influence factor is introduced into each decomposition layer, so that the wavelet threshold becomes an adaptive threshold and meets the requirement of dynamic filtering, the improved wavelet threshold for the c-th decomposition layer taking the form
D_c = e_c·σ·√(2·ln M)
wherein D_c is the threshold, e_c is the influence factor introduced in the c-th decomposition layer, σ is the standard deviation of the wavelet-domain noise, and M is the total number of wavelet coefficients in the wavelet domain of the corresponding layer.
3. The target detection-based thyroid nodule aggressiveness prediction method of claim 2, wherein: in step S13, the number of wavelet decomposition layers c is set to 3, i.e., c ∈ {1, 2, 3}.
4. The target detection-based thyroid nodule aggressiveness prediction method of claim 1, wherein: in step S21, the data for local training are generated as follows: when generating the xml files required for local training, only one pair of diagonal coordinates of the horizontal bounding rectangle of the nodule is added, and no category information is added.
5. The target detection-based thyroid nodule aggressiveness prediction method of claim 1, wherein: in step S22, the positioning loss weight in the network loss function is 1, that is, only the positioning loss function is retained.
6. The target detection-based thyroid nodule aggressiveness prediction method of claim 1, wherein: the channel attention mechanism in step S2 and step S4 is implemented with a residual attention module inside each block, the residual attention module working as follows:
after being processed by a 3×3 convolution block, the feature maps enter two branches; in the channel attention branch, the feature maps are first passed through global average pooling so that every feature map is pooled to size 1×1, and then through three groups of fully connected layers FC1, FC2 and FC3 whose activation functions are ReLU, SELU and SELU, respectively, FC1 reducing the number of output channels to 1/8 of the original; the features finally enter a fourth fully connected layer FC4, whose output expands the number of channels back to the original count; a Sigmoid function maps the 1×1 feature corresponding to each channel of FC4 to a weight scalar in (0,1), and the channel attention mechanism is realized by multiplying each scalar with the original feature maps.
7. The target detection-based thyroid nodule aggressiveness prediction method of claim 1, wherein: in the step S3, the context information about the glandular tissue acquired in the positioning network includes context information about the nodule and the surrounding tissue region, and in the classification network, a parallel two-way network is adopted to input images containing different degrees of surrounding tissue information, and feature information extraction is enhanced for the nodule itself and the nodule surrounding tissue.
8. The target detection-based thyroid nodule aggressiveness prediction method of claim 1, wherein: in the step S6, the training process of the classification network model is completed by using a pyrrch frame, the classification network model is trained by using ImageNet pre-training weights, and the training stage is completed by using a dynamic learning rate and an early-stop method;
when training the classification network model, Net1 and Net2 are first trained for classification independently, using ImageNet pre-trained weights, on the original data set and on the cropped data set generated by the positioning network model respectively, without the aspect ratio information; Net1, Net2 and the aspect ratio information are then combined into the classification network model for joint training.
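As a minimal sketch of the joint stage, assuming ResNet-18 backbones for Net1 and Net2 and fusion of the two feature vectors with the aspect ratio scalar by simple concatenation (none of which is fixed by the claim), the combined classifier could be organized as follows:

# Illustrative sketch: two ImageNet-pre-trained branches (Net1 on nodule-only crops,
# Net2 on nodule-plus-tissue crops) fused with the aspect ratio scalar for joint
# training. Backbone choice, fusion by concatenation and layer sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        weights = models.ResNet18_Weights.IMAGENET1K_V1
        self.net1 = models.resnet18(weights=weights)       # branch for nodule-only images
        self.net2 = models.resnet18(weights=weights)       # branch for nodule + surrounding tissue
        feat_dim = self.net1.fc.in_features
        self.net1.fc = nn.Identity()                        # use both branches as feature extractors
        self.net2.fc = nn.Identity()
        self.head = nn.Sequential(                          # joint head: two feature vectors + aspect ratio
            nn.Linear(2 * feat_dim + 1, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, nodule_crop, context_crop, aspect_ratio):
        f1 = self.net1(nodule_crop)
        f2 = self.net2(context_crop)
        fused = torch.cat([f1, f2, aspect_ratio.unsqueeze(1)], dim=1)
        return self.head(fused)

model = TwoBranchClassifier()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224), torch.rand(4))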
9. A thyroid nodule invasiveness prediction system based on target detection, which predicts the invasiveness of a thyroid nodule in a thyroid ultrasound image by the target detection-based thyroid nodule invasiveness prediction method according to any one of claims 1 to 8, the system comprising:
a preprocessing module, which preprocesses the clinically obtained thyroid ultrasound images, eliminating image noise while preserving the edge information of the images in the high-frequency domain, so as to obtain an original data set;
a positioning network module, which accurately locates the thyroid nodule in the images of the original data set so as to obtain a new image data set containing only the nodule and another new image data set containing the nodule together with a large amount of surrounding tissue, and which at the same time calculates and extracts the nodule aspect ratio information;
a classification network module, which, based on the original image data set, the cropped data sets generated by the positioning network module and the aspect ratio information output by the positioning network module, considers both the nodule regions and the background tissue regions at different fields of view, fuses and reuses the features at the different fields of view through the FPN, simultaneously obtains the aspect ratio and the context information from the nodule localization, and classifies the nodule in the thyroid ultrasound image to obtain a prediction conclusion on its invasiveness.
10. A thyroid nodule aggressiveness prediction terminal based on target detection, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that: when executing the computer program, the processor performs the target detection-based thyroid nodule aggressiveness prediction method according to any one of claims 1 to 8.
CN202110307648.3A 2021-03-23 2021-03-23 Thyroid nodule invasiveness prediction method based on target detection Active CN112927217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110307648.3A CN112927217B (en) 2021-03-23 2021-03-23 Thyroid nodule invasiveness prediction method based on target detection

Publications (2)

Publication Number Publication Date
CN112927217A true CN112927217A (en) 2021-06-08
CN112927217B CN112927217B (en) 2022-05-03

Family

ID=76175522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110307648.3A Active CN112927217B (en) 2021-03-23 2021-03-23 Thyroid nodule invasiveness prediction method based on target detection

Country Status (1)

Country Link
CN (1) CN112927217B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1839397A (en) * 2003-08-22 2006-09-27 西麦恩公司 Neural network for processing arrays of data with existent topology, such as images, and application of the network
US20190188541A1 (en) * 2017-03-17 2019-06-20 Chien-Yi WANG Joint 3d object detection and orientation estimation via multimodal fusion
CN107680678A (en) * 2017-10-18 2018-02-09 北京航空航天大学 Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system
TW202108178A (en) * 2019-05-14 2021-03-01 美商建南德克公司 METHODS OF USING ANTI-CD79b IMMUNOCONJUGATES TO TREAT FOLLICULAR LYMPHOMA
CN110490892A (en) * 2019-07-03 2019-11-22 中山大学 A kind of Thyroid ultrasound image tubercle automatic positioning recognition methods based on USFaster R-CNN
CN110717518A (en) * 2019-09-10 2020-01-21 北京深睿博联科技有限责任公司 Persistent lung nodule identification method and device based on 3D convolutional neural network
CN110991435A (en) * 2019-11-27 2020-04-10 南京邮电大学 Express waybill key information positioning method and device based on deep learning
CN111291683A (en) * 2020-02-08 2020-06-16 内蒙古大学 Dairy cow individual identification system based on deep learning and identification method thereof
CN111326259A (en) * 2020-04-03 2020-06-23 深圳前海微众银行股份有限公司 Disease trend grade determining method, device, equipment and storage medium
CN112270667A (en) * 2020-11-02 2021-01-26 郑州大学 TI-RADS-based integrated deep learning multi-tag identification method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FABIEN H. WAGNER et al.: "U-Net-Id, an Instance Segmentation Model for Building Extraction from Satellite Images—Case Study in the Joanópolis City, Brazil", 《REMOTE SENSING》 *
PINLE QIN et al.: "Diagnosis of Benign and Malignant Thyroid Nodules Using Combined Conventional Ultrasound and Ultrasound Elasticity Imaging", 《IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS》 *
ZHIQIANG ZHENG et al.: "A Novel Diagnostic Network for Thyroid Nodules Based on Dense Soft Attention Mechanism", 《BASIC & CLINICAL PHARMACOLOGY & TOXICOLOGY》 *
LI LIANG et al.: "Benign and Malignant Identification of Thyroid Nodules Based on Multi-Feature Fusion", 《软件导刊》 (Software Guide) *
WENG ZHI et al.: "Detection Method for Key Components of High-Voltage Transmission Lines Based on Improved YOLOv3", 《计算机应用》 (Journal of Computer Applications) *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643261A (en) * 2021-08-13 2021-11-12 江南大学 Lung disease diagnosis method based on frequency attention network
CN113674247B (en) * 2021-08-23 2023-09-01 河北工业大学 X-ray weld defect detection method based on convolutional neural network
CN113674247A (en) * 2021-08-23 2021-11-19 河北工业大学 X-ray weld defect detection method based on convolutional neural network
CN113723523A (en) * 2021-08-31 2021-11-30 平安科技(深圳)有限公司 Medical image processing method and device, computer equipment and storage medium
CN113723523B (en) * 2021-08-31 2024-05-10 平安科技(深圳)有限公司 Medical image processing method and device, computer equipment and storage medium
CN113807516A (en) * 2021-09-13 2021-12-17 长城计算机软件与系统有限公司 Training method of neural network model and image retrieval method
CN113807516B (en) * 2021-09-13 2024-05-14 新长城科技有限公司 Training method and image retrieval method of neural network model
CN113837293A (en) * 2021-09-27 2021-12-24 电子科技大学长三角研究院(衢州) mRNA subcellular localization model training method, mRNA subcellular localization model localization method and readable storage medium
CN113902983A (en) * 2021-12-06 2022-01-07 南方医科大学南方医院 Laparoscopic surgery tissue and organ identification method and device based on target detection model
CN113902983B (en) * 2021-12-06 2022-03-25 南方医科大学南方医院 Laparoscopic surgery tissue and organ identification method and device based on target detection model
CN114708236A (en) * 2022-04-11 2022-07-05 徐州医科大学 TSN and SSN based thyroid nodule benign and malignant classification method in ultrasonic image
CN115527059A (en) * 2022-08-16 2022-12-27 贵州博睿科讯科技发展有限公司 Road-related construction element detection system and method based on AI (Artificial Intelligence) identification technology
CN115527059B (en) * 2022-08-16 2024-04-09 贵州博睿科讯科技发展有限公司 System and method for detecting road construction elements based on AI (advanced technology) recognition technology
CN115661429B (en) * 2022-11-11 2023-03-10 四川川锅环保工程有限公司 System and method for identifying defects of boiler water wall pipe and storage medium
CN115661429A (en) * 2022-11-11 2023-01-31 四川川锅环保工程有限公司 System and method for identifying defects of water wall tube of boiler and storage medium
CN117333435A (en) * 2023-09-15 2024-01-02 什维新智医疗科技(上海)有限公司 Thyroid nodule boundary definition detection method, thyroid nodule boundary definition detection system, electronic equipment and medium
CN118014948A (en) * 2024-01-31 2024-05-10 中国能源建设集团安徽省电力设计院有限公司 Fan blade surface fault detection method for small sample unmanned aerial vehicle image
CN117935067A (en) * 2024-03-25 2024-04-26 中国人民解放军火箭军工程大学 SAR image building detection method
CN117935067B (en) * 2024-03-25 2024-05-28 中国人民解放军火箭军工程大学 SAR image building detection method

Also Published As

Publication number Publication date
CN112927217B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN112927217B (en) Thyroid nodule invasiveness prediction method based on target detection
CN110599448B (en) Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
CN111583210B (en) Automatic breast cancer image identification method based on convolutional neural network model integration
CN107403438A (en) Improve the ultrasonoscopy focal zone dividing method of fuzzy clustering algorithm
CN112712528B (en) Intestinal tract focus segmentation method combining multi-scale U-shaped residual error encoder and integral reverse attention mechanism
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
CN114266794B (en) Pathological section image cancer region segmentation system based on full convolution neural network
US11935213B2 (en) Laparoscopic image smoke removal method based on generative adversarial network
CN114842238B (en) Identification method of embedded breast ultrasonic image
CN111667459B (en) Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
CN111598144B (en) Training method and device for image recognition model
CN113781387A (en) Model training method, image processing method, device, equipment and storage medium
CN112950615B (en) Thyroid nodule invasiveness prediction method based on deep learning segmentation network
CN117911772A (en) Thyroid nodule benign and malignant classification method based on segmented multi-feature information
CN118154865A (en) Multi-target segmentation method based on ultrasonic brachial plexus image multi-scale discrete optimization
CN111739047A (en) Tongue image segmentation method and system based on bispectrum reconstruction
CN116681883A (en) Mammary gland image focus detection method based on Swin transducer improvement
CN116229074A (en) Progressive boundary region optimized medical image small sample segmentation method
CN113408595B (en) Pathological image processing method and device, electronic equipment and readable storage medium
CN111275719B (en) Calcification false positive recognition method, device, terminal and medium and model training method and device
CN113936006A (en) Segmentation method and device for processing high-noise low-quality medical image
CN118334364B (en) Infrared image feature extraction method, device and infrared small target tracking method
CN118570202B (en) Ankylosing spondylitis rating method based on visual state space model
CN117115176A (en) PSPNet improved thyroid nodule ultrasonic image automatic segmentation method
CN116935048A (en) DSA image semantic segmentation method, system and storage medium based on knowledge distillation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20241009

Address after: Room 401, 4th Floor, Building C, No. 2 South Gate, Xinkang Garden, Huimin Street, Saihan District, Hohhot City, Inner Mongolia Autonomous Region 010010

Patentee after: Inner Mongolia Shuke Health Industry Co.,Ltd.

Country or region after: China

Address before: 010021 No. 235 West University Road, Saihan District, Hohhot City, Inner Mongolia Autonomous Region

Patentee before: INNER MONGOLIA University

Country or region before: China