CN106056595A - Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network - Google Patents


Info

Publication number
CN106056595A
CN106056595A (application CN201610362069.8A; granted as CN106056595B)
Authority
CN
China
Prior art keywords
layer
output
function
nodule
nodules
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610362069.8A
Other languages
Chinese (zh)
Other versions
CN106056595B (en
Inventor
孔德兴
吴法
马金连
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZHEJIANG DESHANG YUNXING IMAGE SCIENCE & TECHNOLOGY Co Ltd
Original Assignee
ZHEJIANG DESHANG YUNXING IMAGE SCIENCE & TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHEJIANG DESHANG YUNXING IMAGE SCIENCE & TECHNOLOGY Co Ltd filed Critical ZHEJIANG DESHANG YUNXING IMAGE SCIENCE & TECHNOLOGY Co Ltd
Publication of CN106056595A publication Critical patent/CN106056595A/en
Application granted granted Critical
Publication of CN106056595B publication Critical patent/CN106056595B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to computer-aided medical diagnosis, and provides a method for automatically identifying whether a thyroid nodule is benign or malignant based on a deep convolutional neural network. The method comprises the following steps: reading B-mode ultrasound data of thyroid nodules; preprocessing the thyroid nodule images; selecting images and obtaining nodule portions and non-nodule portions through segmentation; evenly dividing the extracted ROIs (regions of interest) into p groups, extracting features of the ROIs with a CNN (convolutional neural network), and normalizing them; taking p-1 groups of data as a training set and the remaining group as a test set, and obtaining an identification model through training; and repeating this cross-validation p times to obtain the optimal parameters of the identification model. By means of the deep convolutional neural network, the method segments thyroid nodules automatically, making up for the inability of active-contour and similar methods to handle weak boundaries; it also learns and extracts valuable feature combinations automatically, avoiding the complexity of manual feature selection.

Description

Method for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural network
Technical Field
The invention relates to the field of auxiliary medical diagnosis, in particular to a method for automatically identifying benign and malignant thyroid nodules based on a deep convolutional neural network.
Background
In recent years, with the rapid development of computer technology, digital image processing has been increasingly applied to aided medical diagnosis. The principle is to apply image processing techniques such as segmentation, reconstruction, registration and recognition to medical images acquired by different modalities, so as to obtain valuable diagnostic information.
Thyroid nodules are now very common: investigations have shown that their incidence in the population is nearly 50%, yet only 4%-8% of thyroid nodules can be detected by clinical palpation. Thyroid nodules are classified as benign or malignant, with a malignancy rate of 5%-10%. Early detection of lesions is of great significance for distinguishing benign from malignant nodules, for clinical treatment, and for surgical planning. Thyroid nodule examination based on ultrasound imaging offers real-time imaging, relatively low examination cost, and no trauma to the patient; moreover, because the thyroid lies near the body surface, it is well suited to ultrasound diagnosis. Definitive diagnosis of benign versus malignant nodules mainly relies on needle-biopsy cytology, which imposes a heavy workload, while a physician's reading of ultrasound thyroid images is often affected by the imaging mechanism of the equipment, the acquisition conditions, the display device, and other factors, so that misdiagnosis or missed diagnosis occurs easily. Computer-aided diagnosis of thyroid images is therefore necessary. However, the inherent imaging mechanism means that clinically acquired ultrasound images of thyroid tumors are of poor quality, which limits the accuracy and automation of aided diagnosis. As a result, most current thyroid nodule segmentation is semi-automatic, based on active contours, and classification mostly relies on manually selected features fed to classifiers such as SVM, KNN, or decision trees. These classifiers work well only on small samples, whereas medical data are massive, and classification on large samples would provide better assistance for medical diagnosis.
Disclosure of Invention
The invention mainly aims to overcome the defects in the prior art and provide a method for automatically identifying benign and malignant thyroid nodules based on a deep convolutional neural network. In order to solve the technical problem, the solution of the invention is as follows:
a method for automatically identifying benign and malignant thyroid nodules based on a deep convolutional neural network is provided, and comprises the following processes:
firstly, reading B ultrasonic data of thyroid nodules;
secondly, preprocessing the thyroid nodule image;
thirdly, selecting images, automatically learning to segment nodule and non-nodule parts with a convolutional neural network (CNN), the nodule part being the region of interest (ROI), and refining the nodule shape;
fourthly, dividing the ROI extracted in the third step into p groups on average, extracting the characteristics of the ROI by using CNN, and normalizing;
fifthly, selecting p-1 group data in the fourth step as a training set, testing the rest group, and training a recognition model through CNN for testing;
sixthly, repeating the fifth process to perform p rounds of cross-validation, obtaining the optimal parameters of the recognition model, and finally determining the aided diagnosis system for automatically identifying benign and malignant thyroid nodules based on the deep convolutional neural network;
the first process specifically comprises: reading thyroid nodule images (either in an ordinary picture format or as standard DICOM files), comprising at least 5000 images of benign nodules and at least 5000 images of malignant nodules;
the second process specifically comprises the following steps: carrying out image graying on the thyroid nodule image read in the first process, removing marks made by doctors for measuring nodule related quantity in the ultrasonic image by utilizing gray values of surrounding pixel points, denoising by utilizing Gaussian filtering, and finally carrying out equalization enhancement on contrast by utilizing a gray histogram to obtain a preprocessed enhanced image;
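The preprocessing chain above (graying, denoising, histogram equalization) can be sketched as follows. This is an illustrative approximation rather than the patented implementation: the removal of physicians' measurement marks is omitted, and the Gaussian sigma is an assumed setting.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(img_rgb):
    """Grayscale -> Gaussian denoising -> histogram equalization."""
    # Grayscale using standard luminance weights.
    gray = img_rgb @ np.array([0.299, 0.587, 0.114])
    # Gaussian filtering for denoising (sigma=1.0 is an assumed setting).
    denoised = gaussian_filter(gray, sigma=1.0)
    # Histogram equalization on 8-bit intensities.
    u8 = np.clip(denoised, 0, 255).astype(np.uint8)
    hist = np.bincount(u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[u8]
```

The lookup-table form of equalization maps the cumulative gray-level distribution onto the full [0, 255] range, which is the usual contrast-enhancement step the text describes.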
the third process specifically comprises the following steps:
step 1: selecting 10000 enhanced images preprocessed in the second process, comprising 5000 benign and 5000 malignant nodule images;
step 2: for each picture, nodule parts and non-nodule parts are first cut out manually (by an expert), and an automatic segmentation model is then trained through the CNN;
the CNN is a network structure consisting of 13 convolutional layers and 2 downsampling layers; the convolution kernel sizes are: 13x13 in the first layer, 5x5 in the second and third layers, and 3x3 in the remaining layers; the convolution strides are: 2 in the first two convolutional layers and 1 in the rest; the downsampling layers are 3x3 with stride 2;
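For orientation, the spatial size implied by these kernel/stride settings can be traced with the standard convolution output-size formula. The absence of padding and the placement of the two downsampling layers after all the convolutions are assumptions, since the patent does not state them.

```python
def conv_out(size, kernel, stride, pad=0):
    # Standard convolution output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

def segnet_spatial_trace(size):
    """Trace one spatial dimension through the 13-conv / 2-pool stack:
    13x13 s2, 5x5 s2, 5x5 s1, then ten 3x3 s1 convolutions,
    followed by two 3x3 stride-2 downsampling layers (placement assumed)."""
    layers = [(13, 2), (5, 2), (5, 1)] + [(3, 1)] * 10
    trace = [size]
    for k, s in layers:
        size = conv_out(size, k, s)
        trace.append(size)
    for _ in range(2):  # the two downsampling layers
        size = conv_out(size, 3, 2)
        trace.append(size)
    return trace
```

For a hypothetical 256-pixel input this yields 122 after the first layer, 59 after the second, and finally 8 after the two pooling stages.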
the specific method for training the automatic segmentation model through the CNN comprises the following steps:
(1) features are automatically learned and extracted through the CNN convolutional and downsampling layers, as follows:
Step A: in a convolutional layer, the feature maps of the previous layer are convolved with learnable convolution kernels, and the result is passed through an activation function to obtain the output feature maps; each output map is the activation of either one convolved input map or a combination of several convolved input maps (here we combine several convolved input maps):
x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b_j^l )
wherein the symbol * denotes the convolution operator; l denotes the layer index; i denotes the i-th neuron node of layer l-1; j denotes the j-th neuron node of layer l; M_j denotes the set of selected input maps; x_i^{l-1} is the output of layer l-1, serving as the input of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1+e^{-x}), where e denotes Euler's number 2.718281828 and e^x is the exponential function; k_{ij}^l is the convolution kernel; b_j^l is a bias; each output map is given an additive bias b, but the convolution kernels applied to the input maps differ from one output map to another;
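A minimal numpy sketch of this forward pass (true convolution, i.e. with a flipped kernel, followed by a sigmoid activation) might look like this; the data layout (lists of 2-D maps, per-output kernel dictionaries) is an illustrative choice, not the patent's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(x, k):
    """'Valid' 2-D convolution of map x with kernel k (kernel flipped)."""
    kh, kw = k.shape
    H, W = x.shape
    kr = k[::-1, ::-1]  # flip kernel: true convolution, not correlation
    out = np.zeros((H - kh + 1, W - kw + 1))
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            out[u, v] = np.sum(x[u:u + kh, v:v + kw] * kr)
    return out

def conv_layer_forward(prev_maps, kernels, biases, selections):
    """x_j^l = f( sum_{i in M_j} x_i^{l-1} * k_ij^l + b_j^l )."""
    outs = []
    for j, Mj in enumerate(selections):
        s = sum(conv2d_valid(prev_maps[i], kernels[j][i]) for i in Mj)
        outs.append(sigmoid(s + biases[j]))
    return outs
```

Each output map j sums the convolutions of its selected input maps M_j, adds its own bias, and applies the sigmoid, exactly mirroring the equation above.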
this step also requires a gradient computation to update the sensitivity, which indicates how much the error changes when b changes:

δ_j^l = β_j^{l+1} ( f'(s_j^l) ∘ up(δ_j^{l+1}) )

wherein l denotes the layer index; j denotes the j-th neuron node of layer l; ∘ denotes element-wise multiplication; δ_j^l denotes the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; s^l = W^l x^{l-1} + b^l, where x^{l-1} is the output of layer l-1, W is the weight and b is the bias; f is the activation function, here the sigmoid f(x) = 1/(1+e^{-x}), with e Euler's number 2.718281828 and e^x the exponential function; f'(x) is the derivative of f(x) (for the sigmoid, f'(x) = f(x)(1 - f(x))); β^{l+1} denotes the weight shared within layer l+1; up(.) denotes the upsampling operation (if the downsampling factor is n, up(.) copies each pixel n times horizontally and vertically, restoring the original size);
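The up(.) replication and this sensitivity propagation can be sketched directly in numpy; treating β^{l+1} as a scalar shared per layer follows the text's description, and is the only assumption here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def up(delta, n):
    """Copy each pixel n times horizontally and vertically (the up(.) above)."""
    return np.kron(delta, np.ones((n, n)))

def sensitivity(beta_next, s_l, delta_next, n):
    """delta_j^l = beta^{l+1} * ( f'(s_j^l) o up(delta_j^{l+1}) ),
    with f the sigmoid, so f'(s) = f(s) * (1 - f(s))."""
    fprime = sigmoid(s_l) * (1.0 - sigmoid(s_l))
    return beta_next * fprime * up(delta_next, n)
```

`np.kron` with an all-ones block performs exactly the pixel replication that undoes an n-fold downsampling.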
then the gradient of the bias b is computed quickly by summing over all nodes of the layer-l sensitivity map:
∂E/∂b_j^l = Σ_{u,v} (δ_j^l)_{u,v}
wherein l denotes the layer index; j denotes the j-th neuron node of layer l; b denotes a bias; δ_j^l denotes the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u,v) denotes a position within the output map; E is the error function, E = (1/2) Σ_{h=1}^C (t_h^n − y_h^n)^2 for the n-th sample; C denotes the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0,1}, in which case C = 1, or as y_h ∈ {(0,1),(1,0)}, in which case C = 2; t_h^n denotes the h-th dimension of the label of the n-th sample; y_h^n denotes the h-th output of the network for the n-th sample;
and finally, calculating the weight of the convolution kernel by using a BP algorithm:
ΔW^l = −η ∂E/∂W^l
wherein W is the weight parameter; E is the error function E = (1/2) Σ_{h=1}^C (t_h^n − y_h^n)^2; C denotes the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0,1}, in which case C = 1, or as y_h ∈ {(0,1),(1,0)}, in which case C = 2; t_h^n denotes the h-th dimension of the label of the n-th sample; y_h^n denotes the h-th output of the network for the n-th sample; η is the learning rate, i.e. the step size. Since many connections share the same weight, for a given weight one must compute the gradient at every connection associated with that weight and then sum these gradients:
∂E/∂k_{ij}^l = Σ_{u,v} (δ_j^l)_{u,v} (p_i^{l-1})_{u,v}
wherein l denotes the layer index; i denotes the i-th neuron node of layer l-1; j denotes the j-th neuron node of layer l; δ_j^l denotes the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u,v) denotes a position within the output map; E is the error function E = (1/2) Σ_{h=1}^C (t_h^n − y_h^n)^2, with C, t_h^n and y_h^n as defined above; k_{ij}^l is the convolution kernel; (p_i^{l-1})_{u,v} is the patch of the layer l-1 output, of the same size as the convolution kernel, that was multiplied element by element with k_{ij}^l to produce the value at position (u,v) of the output convolution map;
Step B: a downsampling layer has N input maps and N output maps, each output map being a reduced version of the corresponding input map:
x_j^l = f( β_j^l down(x_j^{l-1}) + b_j^l )
wherein f is the activation function, here the sigmoid f(x) = 1/(1+e^{-x}), with e Euler's number 2.718281828 and e^x the exponential function; down(.) denotes the downsampling function, which sums all pixels of each non-overlapping n×n block of the input image, so that the output image is reduced n-fold in both dimensions (here each 3x3 block of the input image is summed to give one element of the output image, so the output image is reduced 3-fold in both dimensions); β_j^l is a multiplicative weight and b_j^l an additive bias;
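The down(.) block-sum can be written compactly with a numpy reshape; the non-overlapping n×n blocks match the description above.

```python
import numpy as np

def down(x, n):
    """Sum each non-overlapping n x n block (the down(.) above)."""
    H, W = x.shape
    assert H % n == 0 and W % n == 0, "map size must be divisible by n"
    return x.reshape(H // n, n, W // n, n).sum(axis=(1, 3))
```

Reshaping to (H/n, n, W/n, n) groups the pixels by block, and summing over the two inner axes collapses each block to one value.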
parameters β and b are updated by the gradient descent method:
δ_j^l = f'(s_j^l) ∘ conv2( δ_j^{l+1}, rot180(k_j^{l+1}), 'full' )

∂E/∂b_j = Σ_{u,v} (δ_j^l)_{u,v}

∂E/∂β_j = Σ_{u,v} ( δ_j^l ∘ down(x_j^{l-1}) )_{u,v}

wherein conv2 is the two-dimensional convolution operator; rot180 rotates the kernel by 180 degrees; 'full' means that a complete (full) convolution is performed; l denotes the layer index; i denotes the i-th neuron node of layer l-1; j denotes the j-th neuron node of layer l; b denotes a bias; δ_j^l denotes the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u,v) denotes a position within the output map; E is the error function as above, i.e. E = (1/2) Σ_{h=1}^C (t_h^n − y_h^n)^2, with C, t_h^n and y_h^n as defined above; β is a weight parameter (generally taking a value in (0,1]); down(.) denotes the downsampling function; k_j^{l+1} is the convolution kernel of layer l+1; x_j^{l-1} is the j-th map output by layer l-1; s^l = W^l x^{l-1} + b^l, where W is the weight parameter and b the bias, and s_j^l is the j-th component of s^l;
Step C: the CNN automatically learns how to combine the feature maps, the j-th output combination being:
x_j^l = f( Σ_{i=1}^{N_in} α_{ij} ( x_i^{l-1} * k_i^l ) + b_j^l )

s.t. Σ_i α_{ij} = 1, and 0 ≤ α_{ij} ≤ 1.
wherein the symbol * denotes the convolution operator; l denotes the layer index; i denotes the i-th neuron node of layer l-1; j denotes the j-th neuron node of layer l; f is the activation function, here the sigmoid f(x) = 1/(1+e^{-x}), with e Euler's number 2.718281828 and e^x the exponential function; x_i^{l-1} is the i-th component of the output of layer l-1; N_in denotes the number of input maps; k_i^l is a convolution kernel; b_j^l is a bias; α_{ij} denotes the weight, or contribution, of the i-th input map (an output map of layer l-1 serving as input to layer l) in forming the j-th output map;
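The constraints Σ_i α_{ij} = 1 and 0 ≤ α_{ij} ≤ 1 are commonly enforced by reparameterizing α as a softmax over unconstrained variables c_{ij}; this standard trick is an assumption on our part, since the patent does not state how the constraints are maintained during training.

```python
import numpy as np

def combination_weights(c):
    """alpha_ij = exp(c_ij) / sum_i exp(c_ij): a softmax over the input-map
    axis, which guarantees sum_i alpha_ij = 1 and 0 <= alpha_ij <= 1."""
    e = np.exp(c - c.max(axis=0, keepdims=True))  # shift for stability
    return e / e.sum(axis=0, keepdims=True)
```

Gradient descent can then update the unconstrained c freely while every resulting α automatically satisfies the simplex constraints.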
(2) automatically identifying nodules by combining the features extracted in step (1) with a Softmax classifier, thereby determining the automatic segmentation model; specifically, given a sample, the Softmax classifier outputs, for each of the several classes, the probability that the sample belongs to that class; the loss function is:
J(θ) = −(1/m) [ Σ_{i=1}^m Σ_{j=1}^c 1{y^{(i)} = j} log( e^{θ_j^T x^{(i)}} / Σ_{l=1}^c e^{θ_l^T x^{(i)}} ) ] + (λ/2) Σ_{i=1}^c Σ_{j=0}^n θ_{ij}^2
wherein m denotes the total number of samples; c denotes the total number of classes; θ is a matrix in which each row holds the parameters (weights and bias) of one class; 1{.} is the indicator function: its value is 1 when the expression inside the braces is true and 0 otherwise; λ is a parameter balancing the fidelity term (first term) against the regularization term (second term); here λ is a positive number whose magnitude is tuned according to experimental results; J(θ) is the loss function of the system; e denotes Euler's number 2.718281828 and e^x is the exponential function; T denotes the transpose operator of matrix calculation; log denotes the natural logarithm, i.e. the logarithm with base e; n denotes the dimension of the weight and bias parameters; x^{(i)} is the i-th input sample; y^{(i)} is the label of the i-th sample; the loss is then minimized using the gradient:
∇_{θ_j} J(θ) = −(1/m) [ Σ_{i=1}^m x^{(i)} ( 1{y^{(i)} = j} − p(y^{(i)} = j | x^{(i)}; θ) ) ] + λ θ_j
wherein m denotes the total number of samples; θ is a matrix in which each row holds the parameters (weights and bias) of one class; 1{.} is the indicator function as above; λ is the positive balance parameter as above; J(θ) is the loss function of the system and ∇_{θ_j} J(θ) its gradient with respect to θ_j; e denotes Euler's number 2.718281828 and e^x is the exponential function; T denotes the transpose operator; log denotes the natural logarithm; x^{(i)} is the i-th input sample; y^{(i)} is the label of the i-th sample; p(y^{(i)} = j | x^{(i)}; θ) is the probability, given by the model, that sample x^{(i)} belongs to class j;
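The Softmax loss J(θ) and its gradient ∇_{θ_j}J(θ) can be implemented directly in numpy as below; the vectorized layout (θ as a c × n matrix, samples as rows of X, integer labels in {0..c-1}) is an illustrative choice.

```python
import numpy as np

def softmax_probs(theta, X):
    """p(y=j | x; theta) for parameter matrix theta (c x n), inputs X (m x n)."""
    z = X @ theta.T
    z -= z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(theta, X, y, lam):
    """J(theta) and its gradient, matching the formulas above."""
    m = X.shape[0]
    P = softmax_probs(theta, X)
    # Fidelity term: mean negative log-probability of the true class.
    J = -np.log(P[np.arange(m), y]).mean() + 0.5 * lam * np.sum(theta ** 2)
    Y = np.zeros_like(P)
    Y[np.arange(m), y] = 1.0  # one-hot encoding of 1{y^(i) = j}
    grad = -(Y - P).T @ X / m + lam * theta
    return J, grad
```

A finite-difference check confirms the analytic gradient matches the loss, which is the usual way to validate such an implementation before training.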
(here a new Softmax classifier is used, i.e. one with only two classes; for a thyroid image, the probabilities given by Softmax yield a probability map distinguishing nodule regions from non-nodule regions, from which a rough segmentation of the nodule regions can be obtained)
(3) automatically segmenting the nodules of all thyroid images using the CNN, i.e. distinguishing nodule regions from non-nodule regions and finding the boundaries of the nodule regions, then refining the shapes of the segmented nodules, namely filling holes and removing connections to non-nodule regions using the erosion and dilation morphological operators;
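The refinement step (hole filling plus erosion/dilation to break thin connections) can be sketched with scipy's binary morphology; the 0.5 probability threshold and the 3x3 structuring element are assumed values, not stated in the patent.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes, binary_opening

def refine_nodule_mask(prob_map, thresh=0.5):
    """Threshold the Softmax probability map, fill interior holes, and
    apply an opening (erosion followed by dilation) to cut thin
    connections between the nodule and non-nodule regions."""
    mask = prob_map > thresh
    mask = binary_fill_holes(mask)
    mask = binary_opening(mask, structure=np.ones((3, 3)))
    return mask
```

An opening removes structures thinner than the structuring element while leaving the bulk of the nodule region intact, which is the effect the text attributes to combined erosion and dilation.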
step 3: automatically segmenting all thyroid nodule pictures (i.e. the 10000 pictures) with the model obtained in step 2 to obtain the ROIs, i.e. all benign and malignant nodules;
the fourth process specifically comprises: evenly dividing the ROIs automatically segmented in the third process into p groups and normalizing the data, i.e. after the nodules are automatically segmented, extracting nodule features and applying a linear transformation that maps the feature values into [0,1];
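The [0,1] linear normalization and the division into p groups can be sketched as follows; the per-feature min-max mapping is the usual reading of "linear transformation to [0,1]", and the random shuffle before grouping is an assumption.

```python
import numpy as np

def minmax_normalize(F):
    """Linearly map each feature column of F (samples x features) to [0, 1]."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (F - lo) / span

def split_into_groups(n_samples, p, seed=0):
    """Shuffle sample indices and split them into p near-equal groups."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, p)
```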
the fifth process specifically comprises: extracting features from all ROIs (the feature extraction method is the same as in the automatic segmentation of the third process, except that it operates only on the nodule region, whereas in the automatic segmentation part features are extracted from nodule and non-nodule regions alike; compared with the segmentation network, the structure has three fewer convolutional layers and adds 3 fully connected layers with 64, 64 and 1 neuron nodes; the convolution kernel sizes are 13x13 in the first layer, 5x5 in the second and third layers, and 3x3 in the remaining layers; the strides are 2 in the first three convolutional layers and 1 in the rest; the downsampling layers are 3x3 with stride 2);
then a new Softmax classifier, i.e. one with only two classes, is used to minimize the loss function, i.e. to optimize J(θ), with the number of classes c = 2 (benign nodules and malignant nodules); the probability that a nodule is benign or malignant is obtained by gradient descent, the specific procedure being the same as in the automatic segmentation of the third process (a class label is then predicted from these probabilities, yielding the benign/malignant diagnosis of the nodule);
the sixth process specifically comprises: repeating the fifth process so that, among the p groups of data, each round selects p-1 groups for training and uses the remaining group for testing; this finally yields the optimal parameters of the recognition model and hence the aided diagnosis system for automatically identifying benign and malignant thyroid nodules based on the deep convolutional neural network;
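The p-fold cross-validation loop of processes five and six can be sketched as follows; `train_and_score` is a hypothetical callable standing in for the CNN training and testing, which are not reproduced here.

```python
import numpy as np

def cross_validate(groups, train_and_score):
    """Hold out each of the p groups once, train on the remaining p-1,
    and return the mean test score. `train_and_score` is any callable
    (train_idx, test_idx) -> score supplied by the caller."""
    scores = []
    for k in range(len(groups)):
        test_idx = groups[k]
        train_idx = np.concatenate([g for i, g in enumerate(groups) if i != k])
        scores.append(train_and_score(train_idx, test_idx))
    return float(np.mean(scores))
```

Model parameters would then be selected from the configuration achieving the best mean score across the p folds.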
the thyroid nodule image to be identified is input into the auxiliary diagnosis system, and the benign and malignant diagnosis of the nodule can be obtained.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, thyroid nodules can be automatically segmented by means of the deep convolutional neural network, the defect that the problem of weak boundary cannot be solved based on the active contour and the like is overcome, valuable feature combinations can be automatically learned and extracted, the complexity of manually selecting features is avoided, the extracted features are more favorable for finding out main rule information of benign and malignant thyroid nodules, the accuracy of a recognition system is improved, and high adaptability is obtained.
Drawings
Fig. 1 is a flow chart for identifying benign and malignant thyroid nodules based on a deep convolutional neural network.
Fig. 2 is a diagram of a convolutional neural network architecture for automatically segmenting and identifying thyroid nodules.
Fig. 3 is a raw picture of thyroid nodules used in the examples.
FIG. 4 is a photograph of a mask drawn by the expert for the thyroid nodule region of FIG. 3.
Fig. 5 is a raw picture of thyroid nodules in the examples.
Fig. 6 is a picture of the effect of automatically segmenting the nodule region of fig. 5 using CNN.
Detailed Description
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the following examples are presented to enable those skilled in the art to more fully understand the present invention and are not intended to limit the invention in any way.
A method for automatically identifying benign and malignant thyroid nodules based on a deep convolutional neural network is shown in figure 1 and comprises the following steps:
firstly, reading B ultrasonic data of thyroid nodules;
secondly, preprocessing the thyroid nodule image;
thirdly, selecting images (including equal numbers of benign and malignant nodule images), automatically learning to segment nodule and non-nodule parts with a convolutional neural network (CNN), the nodule part being the region of interest (ROI), and refining the nodule shape;
and fourthly, dividing the ROI extracted in the third step into p groups on average, extracting the characteristics of the ROI by using CNN, and normalizing.
Fifthly, selecting p-1 group data in the fourth step as a training set, testing the rest group, and training a model through CNN for testing;
sixthly, repeating the fifth process to perform p rounds of cross-validation, obtaining the optimal parameters of the recognition model, and finally determining the aided diagnosis system for automatically identifying benign and malignant thyroid nodules based on the deep convolutional neural network;
the first process specifically comprises: reading the B-mode ultrasound data of thyroid nodules, either in an ordinary picture format or as standard DICOM files, comprising at least 5000 images of benign nodules and at least 5000 images of malignant nodules; in the fifth process, all pictures of the training set (i.e. p-1 groups of data) are first read in to train the deep-convolutional-network aided diagnosis system, and the remaining group is then read in to test the system; when the system is used for aided diagnosis of benign and malignant thyroid nodules, all pictures of the nodules to be diagnosed are read in;
the second process specifically comprises the following steps: carrying out image graying on the thyroid nodule image read in the first process, removing marks made by doctors for measuring nodule related quantity in the ultrasonic image by utilizing gray values of surrounding pixel points, denoising by utilizing Gaussian filtering, and finally carrying out equalization enhancement on contrast by utilizing a gray histogram to obtain a preprocessed enhanced image;
the third process specifically comprises the following steps:
step 1: selecting 10000 enhanced images preprocessed in the second process, comprising 5000 benign and 5000 malignant nodule images;
step 2: an expert cuts out nodule parts and non-nodule parts, and an automatic segmentation model is then trained through the CNN; the CNN here is a network structure composed of 13 convolutional layers and 2 downsampling layers; the convolution kernel sizes are: 13x13 in the first layer, 5x5 in the second and third layers, and 3x3 in the remaining layers; the strides are: 2 in the first two convolutional layers and 1 in the rest; the downsampling layers are 3x3 with stride 2; the specific convolutional neural network structure is shown in fig. 2;
the specific method for training the automatic segmentation model through the CNN comprises the following steps:
(1) features are automatically learned and extracted through the CNN convolutional and downsampling layers, as follows:
Step A: in a convolutional layer, the feature maps of the previous layer are convolved with learnable convolution kernels, and the result is passed through an activation function to obtain the output feature maps; each output map is the activation of either one convolved input map or a combination of several convolved input maps (here we combine several convolved input maps):
x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b_j^l )
wherein the symbol * denotes the convolution operator; l denotes the layer index; i denotes the i-th neuron node of layer l-1; j denotes the j-th neuron node of layer l; M_j denotes the set of selected input maps; x_i^{l-1} is the output of layer l-1, serving as the input of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1+e^{-x}), where e denotes Euler's number 2.718281828 and e^x is the exponential function; k_{ij}^l is the convolution kernel; b_j^l is a bias; each output map is given an additive bias b, but the convolution kernels applied to the input maps differ from one output map to another;
this step also requires a gradient calculation to update the sensitivity, which indicates how much the error changes when the bias b changes:

δ_j^l = β_j^{l+1} ( f'(s_j^l) ∘ up(δ_j^{l+1}) )

wherein l represents the number of layers; j represents the jth neuron node of layer l; ∘ denotes element-wise multiplication; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; s^l = W^l x^{l-1} + b^l, where W is a weight and b is a bias; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{-x}); e represents the Euler number 2.718281828, and e^x is the exponential function; f'(x) is the derivative of f(x), and for the sigmoid function f'(x) = (1 − f(x)) f(x); β_j^{l+1} represents the weight shared within layer l+1; up(.) represents the upsampling operation: if the downsampling factor is n, the upsampling operation copies each pixel n times in the horizontal and vertical directions, so that the original size is recovered;
then all nodes in the sensitivity map of layer l are summed to quickly calculate the gradient of the bias b:
∂E/∂b_j^l = Σ_{u,v} ( δ_j^l )_{u,v}
wherein l represents the number of layers; j represents the jth neuron node of layer l; b represents a bias; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u, v) indexes the position in the output maps; E is the error function, where E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2; C represents the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; t_h^n represents the hth dimension of the label of the nth sample; y_h^n represents the hth output of the network for the nth sample;
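The up(.) operation and the bias-gradient sum above can be sketched as follows (illustrative only; `np.kron` is one convenient way to copy each pixel n times in both directions):

```python
import numpy as np

def up(delta, n):
    # up(.): replicate each pixel n times horizontally and vertically,
    # recovering the size the map had before n x n downsampling
    return np.kron(delta, np.ones((n, n)))

def bias_gradient(delta):
    # dE/db_j^l = sum over all (u, v) of (delta_j^l)_{u,v}
    return delta.sum()
```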
and finally, the weights of the convolution kernels are calculated using the BP (backpropagation) algorithm:
ΔW^l = − η ∂E/∂W^l
wherein W is a weight parameter; E is the error function, where E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2; C represents the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; t_h^n represents the hth dimension of the label of the nth sample; y_h^n represents the hth output of the network for the nth sample; η is the learning rate, i.e. the step size. Since many connection weights are shared, for a given weight the gradient must be taken over all connections associated with that weight and the gradients then summed:
∂E/∂k_{ij}^l = Σ_{u,v} ( δ_j^l )_{u,v} ( p_i^{l-1} )_{u,v}
wherein l represents the number of layers; i represents the ith neuron node of layer l; j represents the jth neuron node of layer l; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u, v) indexes the position in the output maps; E is the error function, with E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2 as above; k_{ij}^l is the convolution kernel; (p_i^{l-1})_{u,v} is the patch of x_i^{l-1} that is multiplied element-by-element with k_{ij}^l when computing the value at position (u, v) of the output convolution map, i.e. the region block of the previous layer having the same size as the convolution kernel;
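The patch-sum formula for the kernel gradient can be transcribed directly; the sketch below (hypothetical names, illustrative only) accumulates δ-weighted patches of the previous-layer map:

```python
import numpy as np

def kernel_gradient(x_prev, delta, kh, kw):
    # dE/dk_ij^l = sum_{u,v} (delta_j^l)_{u,v} * (p_i^{l-1})_{u,v}:
    # for each output position (u, v), weight the kh x kw patch of the
    # previous-layer map by the sensitivity at (u, v) and accumulate
    grad = np.zeros((kh, kw))
    H, W = delta.shape
    for u in range(H):
        for v in range(W):
            grad += delta[u, v] * x_prev[u:u + kh, v:v + kw]
    return grad
```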
and B: the down-sampling layer has N input maps and N output maps, each output map being a reduced version of the corresponding input map; then:
x_j^l = f( β_j^l down( x_j^{l-1} ) + b_j^l )
wherein f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{-x}); e represents the Euler number 2.718281828, and e^x is the exponential function; down(.) is the downsampling function: all pixels of each distinct n×n block of the input image are summed, so that the output image is reduced n times in both dimensions (here each 3x3 block of the input image is summed to give the value of one element of the output image, so the output image is reduced 3 times in both dimensions); β_j^l is the multiplicative weight and b_j^l the additive bias of the jth output map;
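The block-sum downsampling of step B can be sketched as follows (an illustrative assumption, not the patented implementation):

```python
import numpy as np

def down(x, n):
    # down(.): sum all pixels of each distinct n x n block, so the
    # output map is n times smaller in both dimensions
    H, W = x.shape
    assert H % n == 0 and W % n == 0
    return x.reshape(H // n, n, W // n, n).sum(axis=(1, 3))

def subsample_layer(x, beta, b, n=3):
    # x_j^l = f( beta_j^l * down(x_j^{l-1}) + b_j^l ), sigmoid activation
    return 1.0 / (1.0 + np.exp(-(beta * down(x, n) + b)))
```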
parameters β and b are updated by the gradient descent method:
∂E/∂b_j = Σ_{u,v} ( δ_j^l )_{u,v}
wherein the sensitivity of the layer is first computed as δ_j^l = f'(s_j^l) ∘ conv2( δ_j^{l+1}, rot180(k_j^{l+1}), 'full' ), and the gradient of β is ∂E/∂β_j = Σ_{u,v} ( δ_j^l ∘ down(x_j^{l-1}) )_{u,v}; conv2 is the two-dimensional convolution operator; rot180 rotates the kernel by 180 degrees; 'full' means that a complete convolution is performed; l represents the number of layers; i represents the ith neuron node of layer l; j represents the jth neuron node of layer l; b represents a bias; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u, v) indexes the position in the output maps; E is the error function as above, i.e. E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2, where C represents the dimension of the label (for a two-class problem y_h ∈ {0, 1} with C = 1, or y_h ∈ {(0,1), (1,0)} with C = 2), t_h^n is the hth dimension of the label of the nth sample, and y_h^n is the hth output of the network for the nth sample; β is a weight parameter (generally taking a value in (0, 1]); down(.) represents the downsampling function; k_j^{l+1} is the convolution kernel of layer l+1; x_j^{l-1} is the jth output map of layer l-1; s^l = W^l x^{l-1} + b^l, where W is a weight parameter, b is a bias, and s_j^l is the jth component of s^l;
and C: the CNN automatically learns the combinations of feature maps; the jth feature map is combined as:
x_j^l = f( Σ_{i=1}^{N_in} α_{ij} ( x_i^{l-1} * k_i^l ) + b_j^l )

s.t. Σ_i α_{ij} = 1, and 0 ≤ α_{ij} ≤ 1.
wherein the symbol * denotes the convolution operator; l represents the number of layers; i represents the ith neuron node of layer l; j represents the jth neuron node of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{-x}); e represents the Euler number 2.718281828, and e^x is the exponential function; x_i^{l-1} is the ith component of the layer l-1 output; N_in represents the number of input maps; k_i^l is a convolution kernel; b_j^l is a bias; α_{ij} represents the weight, or contribution, of the ith input map (an output map of layer l-1 used as input to layer l) in forming the jth output map;
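The constraint that the α_{ij} are nonnegative and sum to 1 is commonly enforced by parametrizing them through a softmax over unconstrained variables; the sketch below follows that assumption (the parameter name `c_j` is hypothetical) and returns the weighted sum Σ_i α_{ij} m_i, i.e. the quantity inside f(...):

```python
import numpy as np

def combine_maps(conv_maps, c_j):
    # Enforce sum_i alpha_ij = 1 and 0 <= alpha_ij <= 1 by a softmax
    # over unconstrained parameters c_j, then form the weighted sum
    # of the already-convolved maps x_i^{l-1} * k_i^l.
    exp_c = np.exp(c_j - c_j.max())
    alpha = exp_c / exp_c.sum()          # alpha_ij >= 0, sums to 1
    return sum(a * m for a, m in zip(alpha, conv_maps)), alpha
```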
(2) nodules are automatically identified using the features extracted in step (1) combined with Softmax, determining the model for automatic segmentation; specifically, given a sample, the Softmax recognition procedure outputs, for each of several classes, a probability value indicating the probability that the sample belongs to that class, and the loss function is:
J(θ) = − (1/m) [ Σ_{i=1}^{m} Σ_{j=1}^{c} 1{y^{(i)} = j} log( e^{θ_j^T x^{(i)}} / Σ_{l=1}^{c} e^{θ_l^T x^{(i)}} ) ] + (λ/2) Σ_{i=1}^{c} Σ_{j=0}^{n} θ_{ij}^2
wherein m represents the total number of samples; c represents the total number of classes into which the samples can be divided; θ is a matrix in which each row holds the parameters of one class, i.e. weights and bias; 1{.} is the indicator function: when the expression in the braces is true the function equals 1, otherwise 0; λ is a parameter balancing the fidelity term (the first term) against the regularization term (the second term), where λ is a positive number (its magnitude is adjusted according to experimental results); J(θ) is the loss function of the system; e represents the Euler number 2.718281828, and e^x is the exponential function; T denotes the transpose operator in matrix calculations; log represents the natural logarithm, i.e. the logarithm with base e; n represents the dimension of the weight and bias parameters; x^{(i)} is the input vector of the ith sample; y^{(i)} is the label of the ith sample; the gradient is then used to solve it:
∇_{θ_j} J(θ) = − (1/m) Σ_{i=1}^{m} [ x^{(i)} ( 1{y^{(i)} = j} − p( y^{(i)} = j | x^{(i)}; θ ) ) ] + λ θ_j
wherein m represents the total number of samples; θ is a matrix in which each row holds the parameters of one class, i.e. weights and bias; 1{.} is the indicator function: when the expression in the braces is true the function equals 1, otherwise 0; λ is a parameter balancing the fidelity term (the first term) against the regularization term (the second term), where λ is a positive number (its magnitude is adjusted according to experimental results); J(θ) is the loss function of the system; ∇_{θ_j} J(θ) is its gradient with respect to θ_j; p(y^{(i)} = j | x^{(i)}; θ) = e^{θ_j^T x^{(i)}} / Σ_{l=1}^{c} e^{θ_l^T x^{(i)}} is the probability that the ith sample belongs to class j; e represents the Euler number 2.718281828, and e^x is the exponential function; T denotes the transpose operator in matrix calculations; x^{(i)} is the input vector of the ith sample; y^{(i)} is the label of the ith sample;
(here a new Softmax classifier is used, i.e. a Softmax classifier with only two classes; for a thyroid image, a probability map distinguishing all nodule regions from non-nodule regions can be obtained from the probabilities given by Softmax, and a rough segmentation of the nodule regions can be derived from this map)
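The Softmax loss J(θ) and its gradient above can be sketched in vectorized NumPy (illustrative only; the function and variable names are hypothetical):

```python
import numpy as np

def softmax_probs(theta, X):
    # p(y = j | x; theta) = exp(theta_j^T x) / sum_l exp(theta_l^T x)
    z = X @ theta.T                       # (m, c) class scores
    z -= z.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_loss_grad(theta, X, y, lam):
    # J(theta) and its gradient: fidelity term plus
    # (lam / 2) * sum(theta^2) regularization, matching the formulas above
    m = X.shape[0]
    p = softmax_probs(theta, X)
    ind = np.zeros_like(p)
    ind[np.arange(m), y] = 1.0            # indicator 1{y_i = j}
    J = -np.sum(ind * np.log(p)) / m + 0.5 * lam * np.sum(theta ** 2)
    grad = -(ind - p).T @ X / m + lam * theta
    return J, grad
```

With c = 2 classes this reduces to the two-class (benign/malignant) classifier used in the fifth process.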
(3) the nodules of all thyroid glands are automatically segmented by the CNN, i.e. nodule regions are distinguished from non-nodule regions and the boundaries of the nodule regions are found; the shapes of the segmented nodules are then refined, i.e. holes are filled and connections with non-nodule regions are removed using erosion and dilation morphological operators;
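The hole-filling and connection-removal refinement can be sketched with plain erosion/dilation operators (an illustrative implementation in which pixels outside the image are treated as background; a 3x3 structuring element is assumed):

```python
import numpy as np

def dilate(mask, it=1):
    # binary dilation with a 3x3 structuring element
    m = mask.astype(bool)
    H, W = m.shape
    for _ in range(it):
        p = np.pad(m, 1)                  # outside counts as background
        out = np.zeros_like(m)
        for du in (0, 1, 2):
            for dv in (0, 1, 2):
                out |= p[du:du + H, dv:dv + W]
        m = out
    return m

def erode(mask, it=1):
    # binary erosion with a 3x3 structuring element
    m = mask.astype(bool)
    H, W = m.shape
    for _ in range(it):
        p = np.pad(m, 1)                  # border pixels get eroded
        out = np.ones_like(m)
        for du in (0, 1, 2):
            for dv in (0, 1, 2):
                out &= p[du:du + H, dv:dv + W]
        m = out
    return m

def refine(mask, it=1):
    # closing (dilate then erode) fills small holes; opening (erode then
    # dilate) removes thin connections to non-nodule regions
    closed = erode(dilate(mask, it), it)
    return dilate(erode(closed, it), it)
```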
and step 3: all thyroid nodule pictures (i.e. 10000 pictures) are automatically segmented using the model obtained in step 2 to obtain the ROIs (regions of interest), i.e. all benign and malignant nodules;
the fourth process specifically comprises the following steps: the ROIs automatically segmented in the third process are divided evenly into p groups and the data are normalized, i.e. after the nodules are automatically segmented their features are extracted and linearly transformed so that the resulting values are mapped to [0, 1];
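The linear mapping of feature values to [0, 1] described above is ordinary min-max normalization; a sketch (illustrative, with a guard for constant features that the text does not discuss):

```python
import numpy as np

def minmax_normalize(features):
    # linear transformation mapping each feature column to [0, 1]:
    # x' = (x - min) / (max - min)
    f = np.asarray(features, dtype=float)
    lo, hi = f.min(axis=0), f.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid divide-by-zero
    return (f - lo) / span
```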
the fifth process specifically comprises the following steps: features are extracted from all ROIs (the feature-extraction method is the same as in the automatic segmentation of the third process, except that: the object is restricted to the nodule region; compared with the automatic segmentation network, the network structure has three fewer convolutional layers and 3 additional fully-connected layers with 64, 64 and 1 neuron nodes; the convolution kernel sizes are 13x13 in the first layer, 5x5 in the second and third layers, and 3x3 in the remaining layers; the step sizes are 2 in the first three convolutional layers and 1 in the remaining layers; the downsampling layers have size 3x3 and step size 2; in the automatic segmentation part, features are extracted from both non-nodule and nodule regions); the specific convolutional neural network structure is shown in fig. 2;
then a new Softmax classification is used, i.e. a Softmax classifier with only two classes is used to solve for the optimal value of the loss function, i.e. to optimize J(θ), where the number of classes c of the Softmax classifier equals 2 (i.e. benign nodules and malignant nodules); the probability of belonging to a benign or malignant nodule is obtained by the gradient descent method, the specific procedure being the same as the automatic segmentation method of the third process (a classification label is then predicted from these probabilities, giving a benign/malignant diagnosis for a nodule);
the sixth process specifically comprises the following steps: the experiment of the fifth process is repeated, i.e. for the p groups of data, p−1 groups are selected for training each time and the remaining group is used for testing; the optimal parameters of the recognition model are finally obtained, yielding the auxiliary diagnosis system for automatically identifying benign and malignant thyroid nodules based on the deep convolutional neural network. A thyroid nodule image to be identified is input into the auxiliary diagnosis system, and the benign/malignant diagnosis of the nodule is obtained.
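The p-fold splitting used in the fourth to sixth processes can be sketched as follows (illustrative index bookkeeping only, not the patented training procedure):

```python
def p_fold_splits(n_samples, p):
    # divide sample indices evenly into p groups; each round uses
    # p-1 groups for training and the remaining group for testing
    idx = list(range(n_samples))
    folds = [idx[i::p] for i in range(p)]
    for t in range(p):
        train = [i for j, f in enumerate(folds) if j != t for i in f]
        yield train, folds[t]
```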
Fig. 3 and 4 show the original thyroid nodule images and the corresponding nodule-region mask images used in the experiment; fig. 5 and 6 show an original thyroid nodule picture and the result of automatically segmenting the nodule-region mask using the CNN.
Finally, it should be noted that the above describes only a specific embodiment of the present invention. Obviously, the present invention is not limited to the above embodiment, and many variations are possible. All modifications that a person skilled in the art can directly derive or suggest from the disclosure of the present invention shall be considered within the scope of the invention.

Claims (1)

1. The method for automatically identifying the benign and malignant thyroid nodules based on the deep convolutional neural network is characterized by comprising the following processes:
firstly, reading B ultrasonic data of thyroid nodules;
secondly, preprocessing the thyroid nodule image;
thirdly, an image is selected, and a Convolutional Neural Network (CNN) is used to automatically learn and segment the nodule part and the non-nodule part, the nodule part being the Region of Interest (ROI); the shape of the nodule part is refined;
fourthly, dividing the ROI extracted in the third step into p groups on average, extracting the characteristics of the ROI by using CNN, and normalizing;
fifthly, selecting p-1 group data in the fourth step as a training set, testing the rest group, and training a recognition model through CNN for testing;
sixthly, repeating the step five, performing p times of cross tests to obtain the optimal parameters of the recognition model, and finally determining an auxiliary diagnosis system for automatically recognizing the benign and malignant thyroid nodules based on the deep convolutional neural network;
the first process specifically comprises the following steps: reading thyroid nodule images including at least 5000 images of benign nodules and at least 5000 images of malignant nodules;
the second process specifically comprises the following steps: carrying out image graying on the thyroid nodule image read in the first process, removing marks made by doctors for measuring nodule related quantity in the ultrasonic image by utilizing gray values of surrounding pixel points, denoising by utilizing Gaussian filtering, and finally carrying out equalization enhancement on contrast by utilizing a gray histogram to obtain a preprocessed enhanced image;
the third process specifically comprises the following steps:
step 1: 10000 enhanced images preprocessed by the second process are selected, comprising 5000 images each of benign and malignant nodules;
step 2: for each picture, firstly, manually cutting out a nodule part and a non-nodule part, and then training an automatic segmentation model through CNN;
the CNN is a network structure consisting of 13 convolutional layers and 2 downsampling layers; the sizes of convolution kernels of the convolutional layers are respectively: the first layer was 13x13, the second and third layers were 5x5, and the remaining layers were 3x 3; the step sizes of the convolutional layers are respectively as follows: the first two convolutional layers are 2, the remainder are 1; the size of the down-sampling layers is 3x3, and the step size is 2;
the specific method for training the automatic segmentation model through the CNN comprises the following steps:
(1) the method comprises the following steps of automatically learning features through a CNN convolution layer and a down-sampling layer, and extracting the features, wherein the method comprises the following specific steps:
step A: in a convolutional layer, the feature maps of the previous layer are convolved with learnable convolution kernels, and an output feature map is then obtained through an activation function; each output is the value of one convolution kernel applied to a single input, or a combination of multiple convolved inputs:
x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b_j^l )
wherein the symbol * denotes the convolution operator; l represents the number of layers; i represents the ith neuron node of layer l-1; j represents the jth neuron node of layer l; M_j represents the set of selected input maps; x_i^{l-1} refers to the output of layer l-1 used as the input of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{-x}); e represents the Euler number 2.718281828, and e^x is the exponential function; k_{ij}^l is the convolution kernel; b_j^l is a bias; each output map is given an additive bias b, but for a particular output map the convolution kernel applied to each input map is different;
this step also requires a gradient calculation to update the sensitivity, which indicates how much the error changes when the bias b changes:

δ_j^l = β_j^{l+1} ( f'(s_j^l) ∘ up(δ_j^{l+1}) )

wherein l represents the number of layers; j represents the jth neuron node of layer l; ∘ denotes element-wise multiplication; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; s^l = W^l x^{l-1} + b^l, where x^{l-1} refers to the output of layer l-1, W is a weight, and b is a bias; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{-x}); e represents the Euler number 2.718281828, and e^x is the exponential function; f'(x) is the derivative of f(x); β_j^{l+1} represents the weight shared within layer l+1; up(.) represents the upsampling operation;
then all nodes in the sensitivity map of layer l are summed to quickly calculate the gradient of the bias b:
∂E/∂b_j^l = Σ_{u,v} ( δ_j^l )_{u,v}
wherein l represents the number of layers; j represents the jth neuron node of layer l; b represents a bias; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u, v) indexes the position in the output maps; E is the error function, where E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2; C represents the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; t_h^n represents the hth dimension of the label of the nth sample; y_h^n represents the hth output of the network for the nth sample;
and finally, the weights of the convolution kernels are calculated using the BP (backpropagation) algorithm:
ΔW^l = − η ∂E/∂W^l
wherein W is a weight parameter; E is the error function, where E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2; C represents the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; t_h^n represents the hth dimension of the label of the nth sample; y_h^n represents the hth output of the network for the nth sample; η is the learning rate, i.e. the step size. Since many connection weights are shared, for a given weight the gradient must be taken over all connections associated with that weight and the gradients then summed:
∂E/∂k_{ij}^l = Σ_{u,v} ( δ_j^l )_{u,v} ( p_i^{l-1} )_{u,v}
wherein l represents the number of layers; i represents the ith neuron node of layer l; j represents the jth neuron node of layer l; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u, v) indexes the position in the output maps; E is the error function, with E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2 as above; k_{ij}^l is the convolution kernel; (p_i^{l-1})_{u,v} is the patch of x_i^{l-1} that is multiplied element-by-element with k_{ij}^l when computing the value at position (u, v) of the output convolution map, i.e. the region block of the previous layer having the same size as the convolution kernel;
and B: the down-sampling layer has N input maps and N output maps, each output map being a reduced version of the corresponding input map; then:
x_j^l = f( β_j^l down( x_j^{l-1} ) + b_j^l )
wherein f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{-x}); e represents the Euler number 2.718281828, and e^x is the exponential function; down(.) represents the downsampling function: all pixels of each distinct n×n block of the input image are summed, so that the output image is reduced n times in both dimensions; each output map has its own multiplicative weight parameter β and additive bias b;
parameters β and b are updated by the gradient descent method:
∂E/∂b_j = Σ_{u,v} ( δ_j^l )_{u,v}
wherein the sensitivity of the layer is first computed as δ_j^l = f'(s_j^l) ∘ conv2( δ_j^{l+1}, rot180(k_j^{l+1}), 'full' ), and the gradient of β is ∂E/∂β_j = Σ_{u,v} ( δ_j^l ∘ down(x_j^{l-1}) )_{u,v}; conv2 is the two-dimensional convolution operator; rot180 rotates the kernel by 180 degrees; 'full' means that a complete convolution is performed; l represents the number of layers; i represents the ith neuron node of layer l; j represents the jth neuron node of layer l; b represents a bias; δ_j^l is the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; (u, v) indexes the position in the output maps; E is the error function as above, i.e. E^n = (1/2) Σ_{h=1}^{C} (t_h^n − y_h^n)^2, where C represents the dimension of the label (for a two-class problem y_h ∈ {0, 1} with C = 1, or y_h ∈ {(0,1), (1,0)} with C = 2), t_h^n is the hth dimension of the label of the nth sample, and y_h^n is the hth output of the network for the nth sample; β is a weight parameter; down(.) represents the downsampling function; k_j^{l+1} is the convolution kernel of layer l+1; x_j^{l-1} is the jth output map of layer l-1; s^l = W^l x^{l-1} + b^l, where W is a weight parameter, b is a bias, and s_j^l is the jth component of s^l;
and C: the CNN automatically learns the combinations of feature maps; the jth feature map is combined as:
x_j^l = f( Σ_{i=1}^{N_in} α_{ij} ( x_i^{l-1} * k_i^l ) + b_j^l )

s.t. Σ_i α_{ij} = 1, and 0 ≤ α_{ij} ≤ 1.
wherein the symbol * denotes the convolution operator; l represents the number of layers; i represents the ith neuron node of layer l; j represents the jth neuron node of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{-x}); e represents the Euler number 2.718281828, and e^x is the exponential function; x_i^{l-1} is the ith component of the layer l-1 output; N_in represents the number of input maps; k_i^l is a convolution kernel; b_j^l is a bias; α_{ij} represents the weight, or contribution, of the ith input map (an output map of layer l-1 used as input to layer l) in forming the jth output map;
(2) nodules are automatically identified using the features extracted in step (1) combined with Softmax, determining the model for automatic segmentation; specifically, given a sample, the Softmax recognition procedure outputs, for each of several classes, a probability value indicating the probability that the sample belongs to that class, and the loss function is:
J(θ) = − (1/m) [ Σ_{i=1}^{m} Σ_{j=1}^{c} 1{y^{(i)} = j} log( e^{θ_j^T x^{(i)}} / Σ_{l=1}^{c} e^{θ_l^T x^{(i)}} ) ] + (λ/2) Σ_{i=1}^{c} Σ_{j=0}^{n} θ_{ij}^2
wherein m represents the total number of samples; c represents the total number of classes into which the samples can be divided; θ is a matrix in which each row holds the parameters of one class, i.e. weights and bias; 1{.} is the indicator function: when the expression in the braces is true the function equals 1, otherwise 0; λ is a parameter balancing the fidelity term against the regularization term, where λ is a positive number; J(θ) is the loss function of the system; e represents the Euler number 2.718281828, and e^x is the exponential function; T denotes the transpose operator in matrix calculations; log represents the natural logarithm, i.e. the logarithm with base e; n represents the dimension of the weight and bias parameters; x^{(i)} is the input vector of the ith sample; y^{(i)} is the label of the ith sample; the gradient is then used to solve it:
∇_{θ_j} J(θ) = − (1/m) Σ_{i=1}^{m} [ x^{(i)} ( 1{y^{(i)} = j} − p( y^{(i)} = j | x^{(i)}; θ ) ) ] + λ θ_j
wherein m represents the total number of samples; θ is a matrix in which each row holds the parameters of one class, i.e. weights and bias; 1{.} is the indicator function: when the expression in the braces is true the function equals 1, otherwise 0; λ is a parameter balancing the fidelity term against the regularization term, where λ is a positive number; J(θ) is the loss function of the system; ∇_{θ_j} J(θ) is its gradient with respect to θ_j; p(y^{(i)} = j | x^{(i)}; θ) = e^{θ_j^T x^{(i)}} / Σ_{l=1}^{c} e^{θ_l^T x^{(i)}} is the probability that the ith sample belongs to class j; e represents the Euler number 2.718281828, and e^x is the exponential function; T denotes the transpose operator in matrix calculations; x^{(i)} is the input vector of the ith sample; y^{(i)} is the label of the ith sample;
(3) the nodules of all thyroid glands are automatically segmented by the CNN, i.e. nodule regions are distinguished from non-nodule regions and the boundaries of the nodule regions are found; the shapes of the segmented nodules are then refined, i.e. holes are filled and connections with non-nodule regions are removed using erosion and dilation morphological operators;
and 3, step 3: automatically segmenting all thyroid nodule pictures by using the model obtained in the step 2 to obtain an ROI (region of interest), namely all benign and malignant nodules;
the fourth process specifically comprises the following steps: dividing the ROI automatically segmented in the third process into p groups on average, normalizing the data, namely extracting the characteristics of the nodules after automatically segmenting the nodules, and performing linear transformation on the characteristics to map a result value to [0,1 ];
the fifth process specifically comprises the following steps: using a CNN training recognition model to extract features of all ROIs;
then a new Softmax classification is used, i.e. a Softmax classifier with only two classes is used to solve for the optimal value of the loss function, i.e. to optimize J(θ), where the number of classes c of the Softmax classifier equals 2; the probability of belonging to a benign nodule or a malignant nodule is obtained by the gradient descent method, the specific procedure being the same as the automatic segmentation method of the third process;
the sixth process specifically comprises the following steps: repeating the process five, namely selecting p-1 group of data for training each time for p group of data, and performing the rest tests to finally obtain the optimal parameters of the recognition model, thereby obtaining the auxiliary diagnosis system for automatically recognizing the benign and malignant thyroid nodules based on the deep convolutional neural network;
the thyroid nodule image to be identified is input into the auxiliary diagnosis system, and the benign and malignant diagnosis of the nodule can be obtained.
CN201610362069.8A 2015-11-30 2016-05-26 Auxiliary diagnosis system for automatically identifying benign and malignant thyroid nodules based on a deep convolutional neural network Active CN106056595B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2015108619029 2015-11-30
CN201510861902 2015-11-30

Publications (2)

Publication Number Publication Date
CN106056595A true CN106056595A (en) 2016-10-26
CN106056595B CN106056595B (en) 2019-09-17

Family

ID=57175505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610362069.8A Active CN106056595B (en) 2015-11-30 2016-05-26 Auxiliary diagnosis system for automatically identifying benign and malignant thyroid nodules based on a deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN106056595B (en)

CN110313036A (en) * 2016-11-09 2019-10-08 国家科学研究中心 For quantifying the multiparameter method of balance
CN110337669A (en) * 2017-01-27 2019-10-15 爱克发医疗保健公司 Multiclass image partition method
CN110706209A (en) * 2019-09-17 2020-01-17 东南大学 Method for positioning tumor in brain magnetic resonance image of grid network
CN110706793A (en) * 2019-09-25 2020-01-17 天津大学 Attention mechanism-based thyroid nodule semi-supervised segmentation method
CN111091560A (en) * 2019-12-19 2020-05-01 广州柏视医疗科技有限公司 Nasopharyngeal carcinoma primary tumor image identification method and system
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning
CN111461158A (en) * 2019-05-22 2020-07-28 什维新智医疗科技(上海)有限公司 Method, apparatus, storage medium, and system for identifying features in ultrasound images
CN111539930A (en) * 2020-04-21 2020-08-14 浙江德尚韵兴医疗科技有限公司 Dynamic ultrasonic breast nodule real-time segmentation and identification method based on deep learning
CN111553919A (en) * 2020-05-12 2020-08-18 上海深至信息科技有限公司 Thyroid nodule analysis system based on elastic ultrasonic imaging
CN111798455A (en) * 2019-09-25 2020-10-20 天津大学 Thyroid nodule real-time segmentation method based on full convolution dense cavity network
CN112166474A (en) * 2018-05-16 2021-01-01 皇家飞利浦有限公司 Automated tumor identification during surgery using machine learning
CN112419396A (en) * 2020-12-03 2021-02-26 前线智能科技(南京)有限公司 Thyroid ultrasonic video automatic analysis method and system
WO2021054901A1 (en) * 2019-09-19 2021-03-25 Ngee Ann Polytechnic Automated system and method of monitoring anatomical structures
CN112614118A (en) * 2020-12-29 2021-04-06 浙江明峰智能医疗科技有限公司 CT image prediction method based on deep learning and computer readable storage medium
US10993653B1 (en) 2018-07-13 2021-05-04 Johnson Thomas Machine learning based non-invasive diagnosis of thyroid disease
CN112862822A (en) * 2021-04-06 2021-05-28 华侨大学 Ultrasonic breast tumor detection and classification method, device and medium
CN113168912A (en) * 2018-09-04 2021-07-23 艾登斯 Ip 有限公司 Determining growth rate of objects in 3D data sets using deep learning
CN113421228A (en) * 2021-06-03 2021-09-21 山东师范大学 Thyroid nodule identification model training method and system based on parameter migration
CN113689412A (en) * 2021-08-27 2021-11-23 中国人民解放军总医院第六医学中心 Thyroid image processing method and device, electronic equipment and storage medium
CN113822386A (en) * 2021-11-24 2021-12-21 苏州浪潮智能科技有限公司 Image identification method, device, equipment and medium
CN114708236A (en) * 2022-04-11 2022-07-05 徐州医科大学 TSN and SSN based thyroid nodule benign and malignant classification method in ultrasonic image
US11937973B2 (en) 2018-05-31 2024-03-26 Mayo Foundation For Medical Education And Research Systems and media for automatically diagnosing thyroid nodules

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102165454A (en) * 2008-09-29 2011-08-24 皇家飞利浦电子股份有限公司 Method for increasing the robustness of computer-aided diagnosis to image processing uncertainties
CN103745227A (en) * 2013-12-31 2014-04-23 沈阳航空航天大学 Method for identifying benign and malignant lung nodules based on multi-dimensional information
CN104200224A (en) * 2014-08-28 2014-12-10 西北工业大学 Valueless image removing method based on deep convolutional neural networks
CN104809443A (en) * 2015-05-05 2015-07-29 上海交通大学 Convolutional neural network-based license plate detection method and system
CN104850836A (en) * 2015-05-15 2015-08-19 浙江大学 Automatic insect image identification method based on deep convolutional neural network
CN104933672A (en) * 2015-02-26 2015-09-23 浙江德尚韵兴图像科技有限公司 Rapid convex optimization algorithm based method for registering three-dimensional CT and ultrasonic liver images

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110313036A (en) * 2016-11-09 2019-10-08 国家科学研究中心 Multiparameter method for quantifying balance
CN106709421A (en) * 2016-11-16 2017-05-24 广西师范大学 Cell image recognition and classification method based on transform-domain features and CNN (Convolutional Neural Network)
CN106709421B (en) * 2016-11-16 2020-03-31 广西师范大学 Cell image identification and classification method based on transform domain features and CNN
CN106780448A (en) * 2016-12-05 2017-05-31 清华大学 Ultrasound thyroid nodule benign and malignant classification method based on transfer learning and feature fusion
CN106846301A (en) * 2016-12-29 2017-06-13 北京理工大学 Retinal image classification method and device
CN110337669A (en) * 2017-01-27 2019-10-15 爱克发医疗保健公司 Multi-class image segmentation method
CN110337669B (en) * 2017-01-27 2023-07-25 爱克发医疗保健公司 Multi-label pipeline method for segmenting anatomical structures in medical images
CN106971198A (en) * 2017-03-03 2017-07-21 北京市计算中心 Pneumoconiosis grading method and system based on deep learning
CN108573491A (en) * 2017-03-10 2018-09-25 南京大学 Three-dimensional ultrasound image segmentation method based on machine learning
CN107358605B (en) * 2017-05-04 2018-09-21 深圳硅基仿生科技有限公司 Deep neural network device and system for identifying diabetic retinopathy
CN107423571A (en) * 2017-05-04 2017-12-01 深圳硅基仿生科技有限公司 Diabetic retinopathy identification system based on fundus images
CN107358605A (en) * 2017-05-04 2017-11-17 深圳硅基智能科技有限公司 Deep neural network and system for identifying diabetic retinopathy
CN107423571B (en) * 2017-05-04 2018-07-06 深圳硅基仿生科技有限公司 Diabetic retinopathy identification system based on fundus images
CN107066759B (en) * 2017-05-12 2020-12-01 华北电力大学(保定) Steam turbine rotor vibration fault diagnosis method and device
CN107066759A (en) * 2017-05-12 2017-08-18 华北电力大学(保定) Turbine rotor vibration fault diagnosis method and device
CN107316004A (en) * 2017-06-06 2017-11-03 西北工业大学 Space target recognition method based on deep learning
CN107194929A (en) * 2017-06-21 2017-09-22 太原理工大学 Lung nodule benign and malignant classification method based on deep belief network
CN107194929B (en) * 2017-06-21 2020-09-15 太原理工大学 Method for tracking region of interest of lung CT image
CN107247971B (en) * 2017-06-28 2020-10-09 中国人民解放军总医院 Intelligent analysis method and system for ultrasonic thyroid nodule risk index
CN107247971A (en) * 2017-06-28 2017-10-13 中国人民解放军总医院 Intelligent analysis method and system for ultrasound thyroid nodule risk index
CN107529645A (en) * 2017-06-29 2018-01-02 重庆邮电大学 Heart sound intelligent diagnosis system and method based on deep learning
CN107529645B (en) * 2017-06-29 2019-09-10 重庆邮电大学 Heart sound intelligent diagnosis system and method based on deep learning
CN107424152A (en) * 2017-08-11 2017-12-01 联想(北京)有限公司 Organ lesion detection method and electronic device, and neural network training method and electronic device
US10706333B2 (en) 2017-08-28 2020-07-07 Boe Technology Group Co., Ltd. Medical image analysis method, medical image analysis system and storage medium
CN107492099A (en) * 2017-08-28 2017-12-19 京东方科技集团股份有限公司 Medical image analysis method, medical image analysis system and storage medium
CN107680678A (en) * 2017-10-18 2018-02-09 北京航空航天大学 Automatic thyroid ultrasound image nodule detection system based on multi-scale convolutional neural networks
CN107886506A (en) * 2017-11-08 2018-04-06 华中科技大学 Automatic localization method for ultrasound thyroid nodules
CN108010031A (en) * 2017-12-15 2018-05-08 厦门美图之家科技有限公司 Portrait segmentation method and mobile terminal
CN108257135A (en) * 2018-02-01 2018-07-06 浙江德尚韵兴图像科技有限公司 Computer-aided diagnosis system for interpreting medical image features based on deep learning
CN108717700B (en) * 2018-04-09 2021-11-30 杭州依图医疗技术有限公司 Method and device for detecting the long and short diameters of a nodule
CN108717700A (en) * 2018-04-09 2018-10-30 杭州依图医疗技术有限公司 Method and device for detecting the long and short diameters of a nodule
CN112166474A (en) * 2018-05-16 2021-01-01 皇家飞利浦有限公司 Automated tumor identification during surgery using machine learning
CN108717554A (en) * 2018-05-22 2018-10-30 复旦大学附属肿瘤医院 Thyroid tumor histopathological slide image classification method and device
US11937973B2 (en) 2018-05-31 2024-03-26 Mayo Foundation For Medical Education And Research Systems and media for automatically diagnosing thyroid nodules
CN108962387A (en) * 2018-06-14 2018-12-07 暨南大学附属第医院(广州华侨医院) Thyroid nodule risk prediction method and system based on big data
CN108846840A (en) * 2018-06-26 2018-11-20 张茂 Lung ultrasound image analysis method and device, electronic device, and readable storage medium
CN108846840B (en) * 2018-06-26 2021-11-09 张茂 Lung ultrasound image analysis method and device, electronic device, and readable storage medium
US10993653B1 (en) 2018-07-13 2021-05-04 Johnson Thomas Machine learning based non-invasive diagnosis of thyroid disease
CN109087703B (en) * 2018-08-24 2022-06-07 南京大学 Peritoneal metastasis labeling method for abdominal CT images based on deep convolutional neural network
CN109087703A (en) * 2018-08-24 2018-12-25 南京大学 Peritoneal metastasis labeling method for abdominal CT images based on deep convolutional neural network
CN113168912A (en) * 2018-09-04 2021-07-23 艾登斯 Ip 有限公司 Determining growth rate of objects in 3D data sets using deep learning
US11996198B2 (en) 2018-09-04 2024-05-28 Aidence Ip B.V. Determination of a growth rate of an object in 3D data sets using deep learning
CN113168912B (en) * 2018-09-04 2023-12-01 艾登斯 Ip 有限公司 Determining growth rate of objects in 3D dataset using deep learning
CN109360633A (en) * 2018-09-04 2019-02-19 北京市商汤科技开发有限公司 Medical image processing method and device, processing equipment, and storage medium
CN109493333A (en) * 2018-11-08 2019-03-19 四川大学 Ultrasound thyroid nodule calcification point extraction algorithm based on convolutional neural networks
CN109685143A (en) * 2018-12-26 2019-04-26 上海市第十人民医院 Recognition model construction method and device for thyroid technetium scan images
CN109829889A (en) * 2018-12-27 2019-05-31 清影医疗科技(深圳)有限公司 Ultrasound image processing method and system, device, and storage medium
CN109919187A (en) * 2019-01-28 2019-06-21 浙江工商大学 Method for classifying thyroid follicular images using bagging fine-tuned CNNs
CN110021022A (en) * 2019-02-21 2019-07-16 哈尔滨理工大学 Thyroid nuclear medicine image diagnosis method based on deep learning
CN109961838A (en) * 2019-03-04 2019-07-02 浙江工业大学 Ultrasound image chronic kidney disease auxiliary screening method based on deep learning
CN110033456A (en) * 2019-03-07 2019-07-19 腾讯科技(深圳)有限公司 Medical image processing method, device, equipment, and system
CN110033456B (en) * 2019-03-07 2021-07-09 腾讯科技(深圳)有限公司 Medical image processing method, device, equipment and system
CN111461158A (en) * 2019-05-22 2020-07-28 什维新智医疗科技(上海)有限公司 Method, apparatus, storage medium, and system for identifying features in ultrasound images
CN110706209A (en) * 2019-09-17 2020-01-17 东南大学 Grid network method for tumor localization in brain magnetic resonance images
CN110706209B (en) * 2019-09-17 2022-04-29 东南大学 Grid network method for tumor localization in brain magnetic resonance images
WO2021054901A1 (en) * 2019-09-19 2021-03-25 Ngee Ann Polytechnic Automated system and method of monitoring anatomical structures
CN111798455B (en) * 2019-09-25 2023-07-04 天津大学 Thyroid nodule real-time segmentation method based on fully convolutional dense dilated network
CN110706793A (en) * 2019-09-25 2020-01-17 天津大学 Attention mechanism-based thyroid nodule semi-supervised segmentation method
CN111798455A (en) * 2019-09-25 2020-10-20 天津大学 Thyroid nodule real-time segmentation method based on fully convolutional dense dilated network
CN111091560A (en) * 2019-12-19 2020-05-01 广州柏视医疗科技有限公司 Nasopharyngeal carcinoma primary tumor image identification method and system
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning
CN111539930A (en) * 2020-04-21 2020-08-14 浙江德尚韵兴医疗科技有限公司 Dynamic ultrasonic breast nodule real-time segmentation and identification method based on deep learning
CN111539930B (en) * 2020-04-21 2022-06-21 浙江德尚韵兴医疗科技有限公司 Dynamic ultrasonic breast nodule real-time segmentation and identification method based on deep learning
CN111553919A (en) * 2020-05-12 2020-08-18 上海深至信息科技有限公司 Thyroid nodule analysis system based on elastic ultrasonic imaging
CN111553919B (en) * 2020-05-12 2022-12-30 上海深至信息科技有限公司 Thyroid nodule analysis system based on elastic ultrasonic imaging
CN112419396A (en) * 2020-12-03 2021-02-26 前线智能科技(南京)有限公司 Automatic thyroid ultrasound video analysis method and system
CN112419396B (en) * 2020-12-03 2024-04-26 前线智能科技(南京)有限公司 Automatic thyroid ultrasonic video analysis method and system
CN112614118B (en) * 2020-12-29 2022-06-21 浙江明峰智能医疗科技有限公司 CT image prediction method based on deep learning and computer readable storage medium
CN112614118A (en) * 2020-12-29 2021-04-06 浙江明峰智能医疗科技有限公司 CT image prediction method based on deep learning and computer readable storage medium
CN112862822B (en) * 2021-04-06 2023-05-30 华侨大学 Ultrasonic breast tumor detection and classification method, device and medium
CN112862822A (en) * 2021-04-06 2021-05-28 华侨大学 Ultrasonic breast tumor detection and classification method, device and medium
CN113421228A (en) * 2021-06-03 2021-09-21 山东师范大学 Thyroid nodule identification model training method and system based on parameter migration
CN113689412A (en) * 2021-08-27 2021-11-23 中国人民解放军总医院第六医学中心 Thyroid image processing method and device, electronic equipment and storage medium
CN113822386B (en) * 2021-11-24 2022-02-22 苏州浪潮智能科技有限公司 Image identification method, device, equipment and medium
CN113822386A (en) * 2021-11-24 2021-12-21 苏州浪潮智能科技有限公司 Image identification method, device, equipment and medium
CN114708236A (en) * 2022-04-11 2022-07-05 徐州医科大学 TSN and SSN based thyroid nodule benign and malignant classification method in ultrasonic image
CN114708236B (en) * 2022-04-11 2023-04-07 徐州医科大学 Thyroid nodule benign and malignant classification method based on TSN and SSN in ultrasonic image

Also Published As

Publication number Publication date
CN106056595B (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
Oskal et al. A U-net based approach to epidermal tissue segmentation in whole slide histopathological images
CN107644420B (en) Blood vessel image segmentation method based on centerline extraction and nuclear magnetic resonance imaging system
CN108257135A (en) 2018-07-06 Computer-aided diagnosis system for interpreting medical image features based on deep learning
CN112380900A (en) Deep learning-based cervical fluid-based cell digital image classification method and system
CN111640120A (en) 2020-09-08 Automatic pancreas CT segmentation method based on saliency densely-connected dilated convolutional network
Pan et al. Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review
CN111598875A (en) Method, system and device for building thyroid nodule automatic detection model
Ashwin et al. Efficient and reliable lung nodule detection using a neural network based computer aided diagnosis system
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
WO2021183765A1 (en) Automated detection of tumors based on image processing
CN114240961A (en) 2022-03-25 U-Net++ cell segmentation network system, method, device and terminal
CN112001895B (en) Thyroid calcification detection device
CN110766670A (en) 2020-02-07 Mammography image tumor localization algorithm based on deep convolutional neural network
Yonekura et al. Improving the generalization of disease stage classification with deep CNN for glioma histopathological images
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
CN112990214A (en) Medical image feature recognition prediction model
CN112700461A (en) System for pulmonary nodule detection and characterization class identification
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN115909006A (en) Mammary tissue image classification method and system based on convolution Transformer
CN113139928A (en) Training method of pulmonary nodule detection model and pulmonary nodule detection method
CN113177554B (en) Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN116778250A (en) Coronary artery lesion classification method based on transfer learning and CBAM
CN111339993A (en) X-ray image metal detection method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 310012 Rooms 709-710, 7th Floor, East Building, No. 90 Wensan Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Deshang Yunxing Medical Technology Co., Ltd.

Address before: Rooms 801-802, 8th Floor, East Science and Technology Building, Building 6, East Software Park, No. 90 Wensan Road, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG DESHANG YUNXING IMAGE SCIENCE & TECHNOLOGY CO., LTD.

GR01 Patent grant
GR01 Patent grant