CN106056595B - Computer-aided diagnosis system for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks - Google Patents
- Publication number
- CN106056595B (granted from application CN201610362069.8A)
- Authority
- CN
- China
- Prior art keywords
- indicates
- output
- layers
- function
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Radiology & Medical Imaging (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to computer-aided medical diagnosis, and aims to provide a system for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks. Using computer techniques, the system processes ultrasound thyroid tumor images as follows: read the B-mode ultrasound data of thyroid nodules; preprocess the thyroid nodule images; segment the selected images into a nodule part and a non-nodule part; divide the extracted ROIs into p groups, extract the features of these ROIs with a CNN, and normalize them; select p-1 groups as the training set and test on the remaining group, training and testing the recognition model; repeat this for p-fold cross-validation to obtain the optimal parameters of the recognition model. The invention can automatically segment thyroid nodules using only a deep convolutional neural network, compensating for the inability of active-contour-based and similar methods to handle weak boundaries, and can automatically learn to extract valuable feature combinations, avoiding the complexity of manual feature selection.
Description
Technical field
The present invention relates to the field of computer-aided medical diagnosis, and in particular to a system for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks.
Background art

In recent years, with the rapid development of computer technology and digital image processing, digital image processing techniques have been applied more and more to computer-aided medical diagnosis. The principle is to apply image processing techniques such as segmentation, reconstruction, registration, and recognition to medical images acquired in different modalities, so as to obtain valuable diagnostic information. The main purpose is to let doctors observe lesions more directly and clearly and to provide an auxiliary reference for clinical diagnosis, which is of great practical significance.

Thyroid nodules are now a widespread condition. Surveys indicate that their incidence in the population approaches 50%, yet only 4%-8% of thyroid nodules can be detected by physical palpation. Thyroid nodules may be benign or malignant, with a malignancy rate of 5%-10%. Early detection of lesions and identification of their benign or malignant nature is of great significance for clinical treatment and surgical planning. Ultrasound examination of thyroid nodules, based on ultrasound imaging, is attractive because it images in real time, is relatively inexpensive, and is harmless to the patient; moreover, the thyroid lies superficially and is well suited to ultrasound image diagnosis. However, the benign/malignant diagnosis of the thyroid still relies mainly on biopsy of living tissue cells, which entails a very large workload, and the interpretation of ultrasound thyroid images is affected by factors such as the imaging mechanism of the medical imaging device, the acquisition conditions, and the display equipment, easily leading to misdiagnosis or missed diagnosis. Computer-aided diagnosis of thyroid images is therefore highly necessary. But the inherent imaging mechanism makes clinically collected ultrasound thyroid tumor images of poor quality, which noticeably harms the accuracy and automation of auxiliary diagnosis. As a result, most current thyroid nodule segmentation is semi-automatic, based on active contours, and classification relies mainly on manually selected features fed to classifiers such as SVM, KNN, and decision trees. Such classifiers only perform well on small sample sets, whereas medical data is massive, and only classification over large samples can truly assist medical diagnosis.
Summary of the invention
The primary object of the present invention is to overcome the deficiencies of the prior art and provide a method for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks. To solve the above technical problem, the solution of the invention is:

A method for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks, comprising the following processes:

One, read the B-mode ultrasound data of thyroid nodules;

Two, preprocess the thyroid nodule images;

Three, on the selected images, use a convolutional neural network (CNN) to automatically learn to segment the nodule part from the non-nodule part; the nodule part is the region of interest (ROI), whose shape is then refined;

Four, divide the ROIs extracted in process three into p groups, extract the features of these ROIs with the CNN, and normalize them;

Five, select p-1 of the groups from process four as the training set and test on the remaining group, training the recognition model with the CNN and testing it;

Six, repeat process five for p-fold cross-validation to obtain the optimal parameters of the recognition model, finally determining the computer-aided diagnosis system for identifying benign and malignant thyroid nodules based on deep convolutional neural networks.
Process one specifically: read the thyroid nodule images (which may be in an ordinary picture format or standard DICOM), including at least 5000 images of benign nodules and at least 5000 images of malignant nodules.

Process two specifically: first convert the thyroid nodule images read in process one to grayscale; use the gray values of surrounding pixels to remove the annotations that doctors draw on the ultrasound image when measuring nodule dimensions; then denoise with Gaussian filtering; finally enhance the contrast with gray-level histogram equalization to obtain the preprocessed, enhanced image.
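The preprocessing pipeline of process two (grayscale conversion, Gaussian denoising, gray-level histogram equalization) can be sketched in plain NumPy. The Gaussian kernel size and sigma are not specified in the patent and are assumptions here:

```python
import numpy as np

def to_gray(rgb):
    """Luminance grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    return g / g.sum()

def gaussian_denoise(img, size=5, sigma=1.0):
    """Separable Gaussian filtering: rows, then columns, with edge padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def hist_equalize(img):
    """Gray-level histogram equalization on an 8-bit image."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    return cdf[img].astype(np.uint8)
```

The annotation-removal step (inpainting the doctors' caliper marks from surrounding gray values) is omitted, as the patent does not detail its algorithm.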
Process three specifically:

Step 1: select 10000 images enhanced by the preprocessing of process two, including 5000 benign and 5000 malignant nodules;

Step 2: for each picture, an expert first manually crops out the nodule part and the non-nodule part, and a model for automatic segmentation is then trained by the CNN.

The CNN is a network composed of 13 convolutional layers and 2 down-sampling layers. The kernel sizes of the convolutional layers are: 13x13 for the first layer, 5x5 for the second and third layers, and 3x3 for the remaining layers. The strides of the convolutional layers are: 2 for the first two, and 1 for all the rest. The down-sampling layers are all 3x3 with stride 2.
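As a sketch, the layer stack just described can be written down and its output size computed. The positions of the two down-sampling layers among the 13 convolutional layers, the input resolution, and the use of 'same' zero padding (p = k // 2) are not stated in the patent and are assumptions for illustration:

```python
# Assumed layer order: (kind, kernel size, stride). The patent gives kernel
# sizes and strides but not where the two 3x3/stride-2 down-sampling layers
# sit; placing them after the 2nd and 3rd convolutions is a guess.
SEGMENTATION_NET = (
    [("conv", 13, 2), ("conv", 5, 2), ("down", 3, 2),
     ("conv", 5, 1), ("down", 3, 2)]
    + [("conv", 3, 1)] * 10
)

def out_size(n, layers):
    """Spatial size after the stack, assuming 'same' zero padding p = k // 2."""
    for _kind, k, s in layers:
        n = (n + 2 * (k // 2) - k) // s + 1
    return n
```

Under these assumptions, a 227x227 input is reduced to a 15x15 feature map.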
The model for automatic segmentation is trained by the CNN as follows:

(1) Features are learned automatically by the convolutional and down-sampling layers of the CNN and then extracted. Specific steps:

Step A: in a convolutional layer, the feature maps of the previous layer are convolved with learnable kernels, and the result is passed through the activation function to obtain the output feature map. Each output may be the convolution of one input or a combination of convolutions of multiple inputs (here we combine the convolutions of multiple input maps):

$$x_j^l = f\Big(\sum_{i\in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big)$$

where the symbol $*$ denotes the convolution operator; $l$ is the layer index; $i$ indexes the $i$-th neuron node of layer $l-1$; $j$ indexes the $j$-th neuron node of layer $l$; $M_j$ is the selected set of input maps; $x_i^{l-1}$ is the output of layer $l-1$, which serves as the input of layer $l$; $f$ is the activation function, here the sigmoid $f(x)=1/(1+e^{-x})$, where $e$ is Euler's number 2.718281828... and $e^x$ the exponential function; $k$ is the convolution kernel; $b$ is the bias. Each output map is given an additive bias $b$, but for a specific output map, the kernels convolving each of its input maps are all different.
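A minimal NumPy sketch of the per-map formula just described: each output map sums true (kernel-flipped) valid convolutions of the selected input maps, adds a bias, and applies the sigmoid. Shapes and values are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(x, k):
    """Valid 2-D convolution of one map with one kernel (kernel is flipped)."""
    kh, kw = k.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    kf = k[::-1, ::-1]                       # true convolution flips the kernel
    out = np.zeros((H, W))
    for u in range(H):
        for v in range(W):
            out[u, v] = np.sum(x[u:u + kh, v:v + kw] * kf)
    return out

def conv_layer_forward(x_prev, kernels, b, M_j):
    """x_j^l = f( sum_{i in M_j} x_i^{l-1} * k_{ij}^l + b_j^l )."""
    s = sum(conv2d_valid(x_prev[i], kernels[i]) for i in M_j) + b
    return sigmoid(s)
```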
This step also requires a gradient computation to update the sensitivities; the sensitivity expresses how much the error changes when $b$ changes:

$$\delta_j^l = \beta_j^{l+1}\left(f'(s_j^l)\circ \mathrm{up}(\delta_j^{l+1})\right)$$

where $l$ is the layer index; $j$ indexes the $j$-th neuron node of layer $l$; $\circ$ denotes element-wise multiplication; $\delta$ is the sensitivity of the output neuron, i.e. the rate of change with respect to the bias $b$; $s^l = W^l x^{l-1} + b^l$, with $x^{l-1}$ the output of layer $l-1$, $W$ the weight, and $b$ the bias; $f$ is the activation function, here the sigmoid $f(x)=1/(1+e^{-x})$, where $e$ is Euler's number 2.718281828... and $e^x$ the exponential function, and $f'(x)$ is its derivative (for the sigmoid, $f'(x)=(1-f(x))f(x)$); $\beta_j^{l+1}$ is the shared weight of the layer; $\mathrm{up}(\cdot)$ is an up-sampling operation (if the down-sampling factor is $n$, up-sampling copies each pixel $n$ times horizontally and vertically, restoring the original size).
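The up(·) operation and the sensitivity update above can be sketched as follows (sigmoid assumed as in the text; the β, s, and δ values are illustrative):

```python
import numpy as np

def up(x, n):
    """Copy each pixel n times horizontally and vertically."""
    return np.kron(x, np.ones((n, n)))

def conv_sensitivity(beta, s, delta_next, n):
    """delta_j^l = beta_j^{l+1} * ( f'(s_j^l) o up(delta_j^{l+1}) ), sigmoid f."""
    fs = 1.0 / (1.0 + np.exp(-s))            # f(s)
    return beta * (fs * (1.0 - fs)) * up(delta_next, n)
```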
Then, summing over all nodes of the sensitivity map of layer $l$ gives a fast computation of the gradient of the bias $b$:

$$\frac{\partial E}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv}$$

where $l$ is the layer index; $j$ indexes the $j$-th neuron node of layer $l$; $b$ is the bias; $\delta$ is the sensitivity of the output neuron, i.e. the rate of change with respect to $b$; $(u,v)$ is a position in the output map; $E$ is the error function, here $E = \frac{1}{2}\sum_{h=1}^{C}(t_h^n - y_h^n)^2$, where $C$ is the dimension of the label (for a two-class problem, the label can be written $y_h\in\{0,1\}$, in which case $C=1$, or $y_h\in\{(0,1),(1,0)\}$, in which case $C=2$); $t_h^n$ is the $h$-th dimension of the label of the $n$-th sample; $y_h^n$ is the $h$-th output of the network for the $n$-th sample.
Finally, the BP algorithm is used to compute the kernel weights:

$$\Delta W = -\eta\,\frac{\partial E}{\partial W}$$

where $W$ is the weight parameter; $E$ is the error function $E=\frac{1}{2}\sum_{h=1}^{C}(t_h^n-y_h^n)^2$, with $C$, $t_h^n$, and $y_h^n$ as above; $\eta$ is the learning rate, i.e. the step size. Since many connections share weights, for a given weight the gradient must be computed over all connections associated with that weight, and these gradients are then summed:

$$\frac{\partial E}{\partial k_{ij}^l} = \sum_{u,v}\left(\delta_j^l\right)_{uv}\left(p_i^{l-1}\right)_{uv}$$

where $l$ is the layer index; $i$ indexes the $i$-th input map and $j$ the $j$-th neuron node of layer $l$; $b$ is the bias and $\delta$ the sensitivity of the output neuron, i.e. the rate of change with respect to $b$; $(u,v)$ is a position in the output map; $E$ is the error function as above, with $C$ the dimension of the label (for two classes, $y_h\in\{0,1\}$ with $C=1$, or $y_h\in\{(0,1),(1,0)\}$ with $C=2$), $t_h^n$ the $h$-th dimension of the label of the $n$-th sample, and $y_h^n$ the $h$-th network output for the $n$-th sample; $k_{ij}^l$ is the convolution kernel; $(p_i^{l-1})_{uv}$ is the patch of $x_i^{l-1}$ that was multiplied element-wise by $k_{ij}^l$ during convolution, i.e. the region blocks of the input picture of the same size as the kernel: the value at position $(u,v)$ of the output convolution map is the result of the element-wise product of the patch at position $(u,v)$ of the previous layer with the kernel $k_{ij}^l$.
Step B: the down-sampling layer has N input maps and exactly N output maps, except that each output map becomes smaller. Then:

$$x_j^l = f\left(\beta_j^l\,\mathrm{down}(x_j^{l-1}) + b_j^l\right)$$

where $f$ is the activation function, here the sigmoid $f(x)=1/(1+e^{-x})$, with $e$ Euler's number 2.718281828... and $e^x$ the exponential function; $\beta_j^l$ is the shared weight of the layer; $\mathrm{down}(\cdot)$ is a down-sampling function: it sums over each distinct $n\times n$ block of the input image, so the output image shrinks by a factor of $n$ in both dimensions (here, each output element is the sum of a fixed $3\times 3$ block of the input image, so the output image shrinks by a factor of 3 in both dimensions). Each output map has its own multiplicative weight parameter $\beta$ and its own additive bias $b$.
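The down(·) sum-pooling and the down-sampling layer's forward pass just described can be sketched as follows (n = 3 as in the text; the β and b values are illustrative):

```python
import numpy as np

def down(x, n):
    """Sum each non-overlapping n x n block; output shrinks n-fold per axis."""
    H, W = x.shape
    x = x[:H - H % n, :W - W % n]            # drop ragged edges, if any
    return x.reshape(H // n, n, W // n, n).sum(axis=(1, 3))

def downsample_forward(x, beta, b, n=3):
    """x_j^l = f( beta_j^l * down(x_j^{l-1}) + b_j^l ), sigmoid f."""
    s = beta * down(x, n) + b
    return 1.0 / (1.0 + np.exp(-s))
```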
The parameters $\beta$ and $b$ are updated by gradient descent:

$$\delta_j^l = f'(s_j^l)\circ \mathrm{conv2}\!\left(\delta_j^{l+1},\,\mathrm{rot180}(k_j^{l+1}),\,\text{'full'}\right)$$
$$\frac{\partial E}{\partial b_j} = \sum_{u,v}\left(\delta_j^l\right)_{uv},\qquad \frac{\partial E}{\partial \beta_j} = \sum_{u,v}\left(\delta_j^l \circ \mathrm{down}(x_j^{l-1})\right)_{uv}$$

where conv2 is the two-dimensional convolution operator; rot180 is a rotation by 180 degrees; 'full' refers to performing a full convolution; $l$ is the layer index; $i$ indexes the $i$-th and $j$ the $j$-th neuron node of layer $l$; $b$ is the bias; $\delta$ is the sensitivity of the output neuron, i.e. the rate of change with respect to $b$; $(u,v)$ is a position in the output map; $E$ is the error function as above, $E=\frac{1}{2}\sum_{h=1}^{C}(t_h^n-y_h^n)^2$, with $C$ the dimension of the label (for two classes, $y_h\in\{0,1\}$ with $C=1$, or $y_h\in\{(0,1),(1,0)\}$ with $C=2$), $t_h^n$ the $h$-th dimension of the label of the $n$-th sample, and $y_h^n$ the $h$-th network output for the $n$-th sample; $\beta$ is the weight parameter (generally valued in $[0,1]$); $\mathrm{down}(\cdot)$ is the down-sampling function; $k_j^{l+1}$ is the convolution kernel of layer $l+1$; $x_j^{l-1}$ is the $j$-th output of layer $l-1$; $s^l = W^l x^{l-1} + b^l$, where $W$ is the weight parameter, $b$ the bias, and $s_j^l$ the $j$-th component of $s^l$.
Step C: the CNN learns the combination of feature maps automatically; the $j$-th combined feature map is:

$$x_j^l = f\Big(\sum_{i=1}^{N_{in}} \alpha_{ij}\,(x_i^{l-1} * k_i^l) + b_j^l\Big),\qquad \text{s.t. } \sum_i \alpha_{ij}=1,\;\; 0\le\alpha_{ij}\le 1$$

where the symbol $*$ denotes the convolution operator; $l$ is the layer index; $i$ indexes the $i$-th and $j$ the $j$-th neuron node of layer $l$; $f$ is the activation function, here the sigmoid $f(x)=1/(1+e^{-x})$, with $e$ Euler's number 2.718281828... and $e^x$ the exponential function; $x_i^{l-1}$ is the $i$-th component of the output of layer $l-1$; $N_{in}$ is the number of input maps; $k_i^l$ is the convolution kernel; $b_j^l$ is the bias; $\alpha_{ij}$ is the weight, or contribution, of the $i$-th input map of layer $l-1$ in forming the $j$-th output map of layer $l$, when the output maps of layer $l-1$ serve as the inputs of layer $l$.
(2) The feature combinations extracted in (1) are fed to a Softmax classifier to identify nodules automatically, determining the model for automatic segmentation. Concretely, given a sample, Softmax outputs a probability value indicating the probability that the sample belongs to each class. The loss function is:

$$J(\theta) = -\frac{1}{m}\Big[\sum_{i=1}^{m}\sum_{j=1}^{c} 1\{y^{(i)}=j\}\log\frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{c} e^{\theta_l^T x^{(i)}}}\Big] + \frac{\lambda}{2}\sum_{i=1}^{c}\sum_{j=0}^{n}\theta_{ij}^2$$

where $m$ is the total number of samples; $c$ is the total number of classes into which the samples can be divided; $\theta$ is a matrix, each row of which is the parameter of one class, i.e. its weights and bias; $1\{\cdot\}$ is the indicator function, whose result is 1 when the expression in braces is true and 0 otherwise; $\lambda$ is the parameter balancing the fidelity term (first term) against the regularization term (second term), taken positive here and tuned according to experimental results; $J(\theta)$ is the loss function of the system; $e$ is Euler's number 2.718281828... and $e^x$ the exponential function; $T$ is the transpose operator of matrix calculus; $\log$ is the natural logarithm, i.e. the logarithm with base $e$; $n$ is the dimension of the weight and bias parameters; $x^{(i)}$ is the input vector of the $i$-th sample; $y^{(i)}$ is the label of the $i$-th sample. The loss is then solved by gradient descent:

$$\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[x^{(i)}\big(1\{y^{(i)}=j\} - p(y^{(i)}=j\,|\,x^{(i)};\theta)\big)\Big] + \lambda\,\theta_j,\qquad p(y^{(i)}=j\,|\,x^{(i)};\theta)=\frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{c} e^{\theta_l^T x^{(i)}}}$$

with the symbols as above; $\nabla_{\theta_j} J(\theta)$ is the derivative of $J(\theta)$ with respect to $\theta_j$.

(What is used here is a new kind of Softmax classifier, i.e. a Softmax classifier with only two classes: for a thyroid picture, the probabilities given by Softmax yield a probability map separating all nodule regions from non-nodule regions, from which a coarse segmentation of the nodule regions is obtained.)
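A minimal sketch of the two-class softmax regression just described, with the weight-decay loss J(θ) and its gradient. The learning rate, λ, iteration count, and the toy data below are assumptions, not values from the patent:

```python
import numpy as np

def softmax_probs(theta, X):
    """Row-wise class probabilities; theta is (c, d), X is (m, d)."""
    z = X @ theta.T
    z -= z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_loss_grad(theta, X, y, lam):
    """J(theta) with weight decay, and its gradient w.r.t. theta."""
    m = X.shape[0]
    P = softmax_probs(theta, X)
    ind = np.eye(theta.shape[0])[y]          # indicator 1{y_i = j}, one-hot rows
    J = -np.sum(ind * np.log(P)) / m + lam / 2 * np.sum(theta**2)
    grad = -(ind - P).T @ X / m + lam * theta
    return J, grad

def train(X, y, c=2, lam=1e-4, lr=0.5, steps=500):
    theta = np.zeros((c, X.shape[1]))
    for _ in range(steps):
        _, g = softmax_loss_grad(theta, X, y, lam)
        theta -= lr * g                      # plain gradient descent
    return theta
```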
(3) The CNN segments all thyroid nodules automatically, i.e. distinguishes nodule regions from non-nodule regions, finds the boundary of the nodule region, and refines the segmented nodule shape: holes are filled and connections to non-nodule regions removed using erosion and dilation morphological operators.

Step 3: all thyroid nodule pictures (i.e. the 10000 pictures) are segmented automatically with the model obtained in step 2, yielding the ROIs, i.e. all benign and malignant nodules.
Process four specifically: the ROIs automatically segmented in process three are divided into p groups, and the data are normalized. After the nodules are segmented automatically, their features are extracted and linearly transformed so that the resulting values are mapped into [0, 1].
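The linear mapping of feature values into [0, 1] in process four is ordinary min-max normalization, which can be sketched per feature column:

```python
import numpy as np

def minmax_normalize(features):
    """Linearly map each feature column into [0, 1]."""
    f = np.asarray(features, dtype=float)
    lo, hi = f.min(axis=0), f.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (f - lo) / span
```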
Process five specifically: the recognition model is trained with the CNN by extracting features from all ROIs (the detailed procedure is the same as the feature extraction in the automatic segmentation of process three, except that the object here is only the nodule region, and the network structure has three fewer convolutional layers than the segmentation network plus 3 fully connected layers whose neuron node counts are 64, 64, and 1 respectively; the kernel sizes are 13x13 for the first layer, 5x5 for the second and third layers, and 3x3 for the remaining layers; the strides are 2 for the first three convolutional layers and 1 for the rest; the down-sampling layers are all 3x3 with stride 2; the automatic segmentation part, by contrast, extracts features for both the non-nodule and the nodule regions simultaneously).

Classification then uses the new Softmax, i.e. the Softmax classifier with only two classes, by solving for the optimal value of a loss function, i.e. optimizing $J(\theta)$ with the number of classes $c$ of the Softmax classifier equal to 2 (benign and malignant nodules). Gradient descent yields the probability of belonging to a benign or a malignant nodule; the detailed procedure is the same as in the automatic segmentation of process three (except that here a classification label is predicted from these probabilities, i.e. a benign/malignant diagnosis of the nodule is made).
Process six specifically: process five is repeated, i.e. for the p groups of data, each time p-1 groups are selected for training and the remaining group for testing, finally obtaining the optimal parameters of the recognition model and thereby the computer-aided diagnosis system for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks.

A thyroid nodule image to be identified is input into this computer-aided diagnosis system, and the benign/malignant diagnosis of the nodule is obtained.
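The p-fold cross-check of process six can be sketched as an index-splitting loop; the model training and scoring calls are placeholders, not part of the patent:

```python
import numpy as np

def p_fold_splits(n_samples, p):
    """Yield (train_idx, test_idx) pairs for p-fold cross-validation."""
    folds = np.array_split(np.arange(n_samples), p)
    for i in range(p):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(p) if j != i])
        yield train, test
```

Each of the p passes trains on p-1 groups and tests on the held-out group; the model parameters with the best average test score are kept.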
Compared with the prior art, the beneficial effects of the present invention are:

The present invention can automatically segment thyroid nodules using only a deep convolutional neural network, compensating for the inability of methods based on active contours and the like to handle weak boundaries, and it can automatically learn to extract valuable feature combinations, avoiding the complexity of manual feature selection. The features extracted in this way are better suited to discovering the main regularities of benign and malignant thyroid nodules, improve the accuracy of the recognition system, and achieve high adaptability.
Brief description of the drawings

Fig. 1 is the flow chart of identifying benign and malignant thyroid nodules based on deep convolutional neural networks.

Fig. 2 is the structure diagram of the convolutional neural network for automatic segmentation and identification of thyroid nodules.

Fig. 3 is an original thyroid nodule image used in the embodiment.

Fig. 4 is the mask of the thyroid nodule region of Fig. 3, drawn by an expert.

Fig. 5 is an original thyroid nodule image in the embodiment.

Fig. 6 shows the effect of automatically segmenting the nodule region of Fig. 5 using the CNN.
Specific embodiments

The present invention is described in further detail below with reference to the drawings and specific embodiments. The following embodiments help those skilled in the art to understand the invention more fully, but do not limit the invention in any way.

A method for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks, as shown in Fig. 1, comprises the following steps:

One, read the B-mode ultrasound data of thyroid nodules;

Two, preprocess the thyroid nodule images;

Three, on the selected images (including equal numbers of benign and malignant nodule images), use a convolutional neural network (CNN) to automatically learn to segment the nodule part from the non-nodule part; the nodule part is the region of interest (ROI), whose shape is then refined;

Four, divide the ROIs extracted in step three into p groups, extract the features of these ROIs with the CNN, and normalize them.

Five, select p-1 of the groups from step four as the training set and test on the remaining group, training the model by CNN and testing it;

Six, repeat step five for p-fold cross-validation to obtain the optimal parameters of the recognition model, finally determining the computer-aided diagnosis system for identifying benign and malignant thyroid nodules based on deep convolutional neural networks.
Process one specifically: the B-mode ultrasound data of thyroid nodules may be in an ordinary picture format or standard DICOM, and include at least 5000 images of benign nodules and at least 5000 images of malignant nodules. When carrying out process five, all pictures of the training set (i.e. p-1 groups of data) are first read in to train the deep-CNN-based computer-aided diagnosis system for identifying benign and malignant thyroid nodules, and the remaining group of data is then read in to test it. When the system is used for actual automatic benign/malignant auxiliary diagnosis, only the pictures of the nodule to be diagnosed need to be read in.

Process two specifically: first convert the thyroid nodule images read in process one to grayscale, use the gray values of surrounding pixels to remove the annotations that doctors draw on the ultrasound image when measuring nodule dimensions, then denoise with Gaussian filtering, and finally enhance the contrast with gray-level histogram equalization to obtain the preprocessed, enhanced image.

Process three specifically:

Step 1: select 10000 images enhanced by the preprocessing of process two, including 5000 benign and 5000 malignant nodules;

Step 2: the nodule part and the non-nodule part are cropped out by an expert, and a model for automatic segmentation is then trained by the CNN. The CNN here is the network composed of 13 convolutional layers and 2 down-sampling layers; the kernel sizes are 13x13 for the first layer, 5x5 for the second and third layers, and 3x3 for the remaining layers, with strides of 2 for the first two convolutional layers and 1 for the rest. The down-sampling layers are all 3x3 with stride 2. The specific convolutional neural network structure is shown in Fig. 2.
The model for automatic segmentation is trained by the CNN as follows:

(1) Features are learned automatically by the convolutional and down-sampling layers of the CNN and then extracted. Specific steps:

Step A: in a convolutional layer, the feature maps of the previous layer are convolved with learnable kernels, and the result is passed through the activation function to obtain the output feature map. Each output may be the convolution of one input or a combination of convolutions of multiple inputs (here we combine the convolutions of multiple input maps):

$$x_j^l = f\Big(\sum_{i\in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big)$$

where the symbol $*$ denotes the convolution operator; $l$ is the layer index; $i$ indexes the $i$-th neuron node of layer $l-1$; $j$ indexes the $j$-th neuron node of layer $l$; $M_j$ is the selected set of input maps; $x_j^l$ is the output; $x_i^{l-1}$ is the output of layer $l-1$, which serves as the input of layer $l$; $f$ is the activation function, here the sigmoid $f(x)=1/(1+e^{-x})$; $e$ is Euler's number 2.718281828... and $e^x$ the exponential function; $k$ is the convolution kernel; $b$ is the bias. Each output map is given an additive bias $b$, but for a specific output map, the kernels convolving each of its input maps are different.

This step also requires a gradient computation to update the sensitivities; the sensitivity expresses how much the error changes when $b$ changes:

$$\delta_j^l = \beta_j^{l+1}\left(f'(s_j^l)\circ \mathrm{up}(\delta_j^{l+1})\right)$$

where $l$ is the layer index; $j$ indexes the $j$-th neuron node of layer $l$; $\circ$ denotes element-wise multiplication; $\delta$ is the sensitivity of the output neuron, i.e. the rate of change with respect to the bias $b$; $s^l = W^l x^{l-1} + b^l$, with $W$ the weight and $b$ the bias; $f$ is the activation function, here the sigmoid $f(x)=1/(1+e^{-x})$; $e$ is Euler's number 2.718281828... and $e^x$ the exponential function; $f'(x)$ is the derivative of $f(x)$: if $f$ is the sigmoid, then $f'(x)=(1-f(x))f(x)$; $\beta_j^{l+1}$ is the shared weight of the layer; $\mathrm{up}(\cdot)$ is an up-sampling operation: if the down-sampling factor is $n$, up-sampling copies each pixel $n$ times horizontally and vertically, restoring the original size.
Then it sums to all nodes in the sensitivity map in l layers, quickly calculates the gradient of biasing b:
Wherein, the l indicates the number of plies;The j indicates l layers of j-th of neuron node;The b indicates biasing;The δ
It indicates the sensitivity of output neuron, that is, biases the change rate of b;The u, v indicate position (u, v) of output maps;The E is
Error function, hereThe C indicates the dimension of label, and the problem of if it is two classification, then label is just
Y can be denoted ash∈ { 0,1 }, C=1, can also be denoted as y at this timeh∈ { (0,1), (1,0) }, at this time C=2;It is describedIndicate n-th
The h of a sample corresponding label is tieed up;It is describedIndicate h-th of output of the corresponding network output of n-th of sample;
Finally, the BP algorithm is used to compute the weights of the convolution kernels:
Wherein, the W is the weight parameter; the E is the error function, E = (1/2) Σ_h (t_h^n - y_h^n)^2; the C indicates the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; the t_h^n indicates the h-th dimension of the label of the n-th sample; the y_h^n indicates the h-th output of the network for the n-th sample; the η is the learning rate, i.e. the step size; since many connections share the same weight, the gradient for a given weight must be taken over all connections associated with that weight, and these gradients are then summed:
Wherein, the l indicates the layer index; the i indicates the i-th neuron node of layer l; the j indicates the j-th neuron node of layer l; b indicates the bias, and the δ indicates the sensitivity of the output neuron, i.e. the rate of change of the error with respect to b; the u, v indicate the position (u, v) in the output map; the E is the error function, here E = (1/2) Σ_h (t_h^n - y_h^n)^2; the C indicates the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; the t_h^n indicates the h-th dimension of the label of the n-th sample; the y_h^n indicates the h-th output of the network for the n-th sample; the k_{ij}^l is the convolution kernel; the (p_i^{l-1})_{uv} is the patch of x_i^{l-1} that is multiplied element by element with k_{ij}^l during the convolution, i.e. each region of the image of the same size as the convolution kernel: the value at position (u, v) of the output convolution map is the result of the element-by-element multiplication of the patch at position (u, v) of the previous layer with the convolution kernel k_{ij}^l;
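The kernel-weight gradient accumulated over patches can be sketched as below (illustrative NumPy; it assumes the forward pass is implemented as cross-correlation — with a flipped kernel the result would additionally be rotated by 180 degrees):

```python
import numpy as np

def kernel_gradient(x_prev, delta):
    # dE/dk_{ij}(a, b) = sum_{u,v} delta_j^l(u, v) * (p_i^{l-1})_{u+a, v+b}:
    # each kernel entry accumulates the sensitivities multiplied by the
    # input-patch values it touched during the forward 'valid' pass
    dh, dw = delta.shape
    kh = x_prev.shape[0] - dh + 1
    kw = x_prev.shape[1] - dw + 1
    g = np.zeros((kh, kw))
    for a in range(kh):
        for b in range(kw):
            g[a, b] = np.sum(delta * x_prev[a:a + dh, b:b + dw])
    return g

# the BP update then reads: k -= eta * kernel_gradient(x_prev, delta)
```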
Step B: a downsampling layer with N input maps has exactly N output maps, except that each output map becomes smaller; then:
Wherein, the f is the activation function, taken here to be the sigmoid f(x) = 1/(1+e^{-x}); e denotes Euler's number 2.718281828..., and e^x is the exponential function; the β indicates the weight shared within each layer; the down(·) indicates a downsampling function: it sums all pixels of each distinct n×n block of the input image, so the output image shrinks by a factor of n in both dimensions (here, each 3×3 block of the input image is taken, and the sum of all its elements becomes one element of the output image, so the output image shrinks by a factor of 3 in both dimensions); each output map has its own multiplicative weight parameter β and its own additive bias b;
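The downsampling forward pass — block summation scaled by β plus the bias b — might look like this (a NumPy sketch; `down_layer_forward` is an illustrative name, and n = 3 matches the 3x3 blocks described above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def down(x, n):
    # sum all pixels of each distinct n x n block;
    # the output shrinks by a factor of n in both dimensions
    H, W = x.shape
    H2, W2 = H // n, W // n
    return x[:H2 * n, :W2 * n].reshape(H2, n, W2, n).sum(axis=(1, 3))

def down_layer_forward(x, beta, b, n=3):
    # x_j^l = f( beta_j^l * down(x_j^{l-1}) + b_j^l )
    return sigmoid(beta * down(x, n) + b)
```

For a 6x6 map of ones and n = 3, down(·) returns a 2x2 map whose every entry is 9.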
The parameters β and b are updated by gradient descent:
Wherein, the conv2 is the two-dimensional convolution operator; the rot180 is a rotation by 180 degrees; the 'full' refers to performing a full convolution; the l indicates the layer index; the i indicates the i-th neuron node of layer l; the j indicates the j-th neuron node of layer l; the b indicates the bias; the δ indicates the sensitivity of the output neuron, i.e. the rate of change of the error with respect to b; the u, v indicate the position (u, v) in the output map; the E is the error function, with the same expression as above, i.e. E = (1/2) Σ_h (t_h^n - y_h^n)^2; the C indicates the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; the t_h^n indicates the h-th dimension of the label of the n-th sample; the y_h^n indicates the h-th output of the network for the n-th sample; the β is a weight parameter (generally valued in [0, 1]); the down(·) indicates a downsampling function; the k_j^{l+1} is the convolution kernel of layer l+1; the x_j^{l-1} is the j-th neuron node of the output of layer l-1; the s^l = W^l x^{l-1} + b^l, where W is the weight parameter and b is the bias, and s_j^l is the j-th component of s^l;
Step C: the CNN learns the combination of feature maps automatically; the j-th combined feature map is then:
x_j^l = f( Σ_{i=1}^{N_in} α_ij (x_i^{l-1} * k_i^l) + b_j^l ),
s.t. Σ_i α_ij = 1, and 0 ≤ α_ij ≤ 1.
Wherein, the symbol * indicates the convolution operator; the l indicates the layer index; the i indicates the i-th neuron node of layer l; the j indicates the j-th neuron node of layer l; the f is the activation function, taken here to be the sigmoid f(x) = 1/(1+e^{-x}); e denotes Euler's number 2.718281828..., and e^x is the exponential function; the x_i^{l-1} is the i-th component of the output of layer l-1; the N_in indicates the number of input maps; the k is the convolution kernel; the b is the bias; the α_ij indicates, when the output maps of layer l-1 serve as the inputs of layer l, the weight or contribution of the i-th input map of layer l-1 in producing the j-th output map;
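The constraints Σ_i α_ij = 1 and 0 ≤ α_ij ≤ 1 are commonly enforced by expressing α as a softmax over unconstrained auxiliary variables c_ij; the patent does not spell this parameterization out, so the sketch below is an assumption:

```python
import numpy as np

def alphas_from_logits(c):
    # c: (N_in, N_out) unconstrained auxiliary variables;
    # alpha_{ij} = exp(c_{ij}) / sum_i exp(c_{ij}) satisfies both
    # constraints by construction
    e = np.exp(c - c.max(axis=0, keepdims=True))  # numerical stabilization
    return e / e.sum(axis=0, keepdims=True)
```

Gradient descent is then applied to the unconstrained c_ij while the α_ij remain a valid convex combination.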
(2) The features extracted in (1) are combined with Softmax to identify nodules automatically, determining the model for automatic segmentation; the specific Softmax identification process is: given a sample, a probability value is output, and that probability value expresses the probability of this sample belonging to each class; the loss function is:
Wherein, the m indicates that there are m samples in total; the c indicates that these samples can be divided into c classes altogether; the θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; 1{·} is an indicator function: when the expression inside the braces is true, the value of the function is 1, otherwise it is 0; the λ is a parameter balancing the fidelity term (first term) against the regularization term (second term), and λ is taken positive here (its size is tuned according to the experimental results); the J(θ) refers to the loss function of the system; the e denotes Euler's number 2.718281828..., and e^x is the exponential function; the T is the transposition operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; n denotes the dimension of the weight and bias parameters; x^(i) is the i-th dimension of the input vector; y^(i) is the i-th dimension of each sample label; the gradient is then used to solve the problem:
Wherein, the m indicates that there are m samples in total; the θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; 1{·} is an indicator function: when the expression inside the braces is true, the value of the function is 1, otherwise it is 0; the λ is a parameter balancing the fidelity term (first term) against the regularization term (second term), and λ is taken positive here (its size is tuned according to the experimental results); the J(θ) refers to the loss function of the system; ∇_θ J(θ) is its derivative; the e denotes Euler's number 2.718281828..., and e^x is the exponential function; the T is the transposition operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; x^(i) is the i-th dimension of the input vector; y^(i) is the i-th dimension of each sample label;
(What is used here is a new kind of Softmax classifier, i.e. a Softmax classifier with only two classes; for a single thyroid image, the probabilities given by Softmax yield a probability map that separates all nodule regions from the non-nodule regions, from which a coarse segmentation of the nodule region is obtained;)
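A minimal sketch of the Softmax loss J(θ) and its gradient (illustrative NumPy; it uses the standard regularized softmax-regression formulas matching the symbols above, with `lam` standing for λ):

```python
import numpy as np

def softmax_probs(theta, X):
    # theta: (c, n) -- one row of class parameters per class; X: (m, n)
    z = X @ theta.T
    z = z - z.max(axis=1, keepdims=True)   # numerical stabilization
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_loss_and_grad(theta, X, y, lam):
    # J(theta) = -(1/m) sum_i sum_j 1{y_i = j} log p_ij
    #            + (lam/2) ||theta||^2          (fidelity + regularization)
    m = X.shape[0]
    p = softmax_probs(theta, X)
    ind = np.zeros_like(p)
    ind[np.arange(m), y] = 1.0             # the indicator function 1{.}
    J = -np.sum(ind * np.log(p)) / m + 0.5 * lam * np.sum(theta ** 2)
    grad = -(ind - p).T @ X / m + lam * theta
    return J, grad
```

With c = 2 this is exactly the two-class classifier described above; gradient descent on `grad` minimizes J(θ).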
(3) The CNN segments the thyroid nodules automatically, i.e. it distinguishes the nodule region from the non-nodule region, finds the boundary of the nodule region, and refines the segmented nodule shape: holes are filled and the connection with the non-nodule region is removed by means of erosion and dilation morphological operators;
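The morphological refinement can be sketched with plain 3x3 erosion and dilation (an illustration, not the patent's exact operators: opening removes thin connections to the non-nodule region and closing fills small holes):

```python
import numpy as np

def dilate(m):
    # 3x3 dilation: a pixel is on if any pixel of its 3x3 neighbourhood is on
    p = np.pad(m.astype(bool), 1, constant_values=False)
    out = np.zeros(m.shape, dtype=bool)
    for du in range(3):
        for dv in range(3):
            out |= p[du:du + m.shape[0], dv:dv + m.shape[1]]
    return out

def erode(m):
    # 3x3 erosion: a pixel stays on only if its whole 3x3 neighbourhood is on
    p = np.pad(m.astype(bool), 1, constant_values=False)
    out = np.ones(m.shape, dtype=bool)
    for du in range(3):
        for dv in range(3):
            out &= p[du:du + m.shape[0], dv:dv + m.shape[1]]
    return out

def refine_mask(m):
    # opening (erosion then dilation) removes thin bridges and specks;
    # closing (dilation then erosion) fills small holes
    opened = dilate(erode(m))
    return erode(dilate(opened))
```

A solid nodule-sized block survives this refinement unchanged, while an isolated one-pixel speck is removed.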
Step 3: using the model obtained in step 2, all thyroid nodule images (i.e. the 10000 images) are segmented automatically, yielding the ROIs, i.e. all benign and malignant nodules;
The process four specifically: the ROIs segmented automatically in process three are divided into p groups, and the data are normalized: after the nodules have been segmented automatically, the features of the nodules are extracted, and a linear transformation is applied to these features so that the resulting values are mapped to [0, 1];
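The linear mapping of feature values to [0, 1] described in process four is a min-max normalization; a sketch (the one-column-per-feature layout is an assumption):

```python
import numpy as np

def normalize_features(F):
    # column-wise linear transform mapping each feature to [0, 1]:
    # (F - min) / (max - min), guarding against constant features
    lo = F.min(axis=0)
    hi = F.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (F - lo) / span
```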
The process five specifically: an identification model is trained with the CNN, and features are extracted from all ROIs (the specific process is the same as the feature-extraction procedure of the automatic segmentation in process three, except that here only the nodule region is processed; compared with the network used for automatic segmentation, the structure has three fewer convolutional layers and three additional fully connected layers, whose numbers of neuron nodes are 64, 64 and 1 respectively; the kernel sizes are: 13x13 for the first layer, 5x5 for the second and third layers, and 3x3 for the remaining layers; the strides are: 2 for the first three convolutional layers and 1 for all the others; the downsampling layers are all of size 3x3 with stride 2; the automatic segmentation part, by contrast, extracts features for the non-nodule region and the nodule region simultaneously); the specific convolutional neural network structure is shown in Figure 2;
Then classification is performed with a new kind of Softmax, i.e. a Softmax classifier with only two classes, by solving for the optimal value of a loss function, i.e. minimizing J(θ); the number of classes of the Softmax classifier equals 2 (i.e. benign nodules and malignant nodules); the probability of belonging to a benign or a malignant nodule is obtained by gradient descent; the specific process is the same as that of the automatic segmentation in process three (except that here a classification label is simply predicted from these probabilities, which amounts to a benign/malignant diagnosis of the nodule);
The process six specifically: the experiment of process five is repeated, i.e. of the p groups of data, p-1 groups are selected for training each time and the remaining group is used for testing; this finally yields the optimal parameters of the identification model, and thus the computer-aided diagnosis system for automatic benign/malignant thyroid nodule identification based on deep convolutional neural networks. A thyroid nodule image to be identified is input into this aided-diagnosis system, and the benign/malignant diagnosis of the nodule is obtained.
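Process six is p-fold cross-validation; a sketch of the index splitting (illustrative, assuming the samples are simply divided into p contiguous groups):

```python
import numpy as np

def p_fold_splits(n_samples, p):
    # yield (train_idx, test_idx) pairs: each round holds out one of the
    # p groups for testing and trains on the remaining p-1 groups
    folds = np.array_split(np.arange(n_samples), p)
    for i in range(p):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(p) if j != i])
        yield train, test
```

Every sample appears in a test group exactly once, so the p test results together cover the whole data set.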
Fig. 3 and Fig. 4 show an original thyroid nodule image used in the experiments and the mask image of the corresponding nodule region; Fig. 5 and Fig. 6 show an original thyroid nodule image and the result of segmenting the nodule-region mask automatically with the CNN.
Finally, it should be noted that the above are merely specific embodiments of the present invention. Clearly, the invention is not restricted to the above embodiments and admits of many variations. All variations that a person skilled in the art can derive or conceive directly from the present disclosure are considered to fall within the protection scope of the present invention.
Claims (1)
1. A computer-aided diagnosis system for automatic identification of benign and malignant thyroid nodules based on deep convolutional neural networks, characterized in that its establishment comprises the following processes:
One, reading the B-mode ultrasound data of thyroid nodules;
Two, preprocessing the thyroid nodule images;
Three, choosing images and using a convolutional neural network, i.e. a CNN, to learn automatically to segment the nodule part and the non-nodule part, the nodule part being the region of interest, i.e. the ROI, and refining the nodule shape;
Four, dividing the ROIs extracted in step three into p groups, extracting the features of these ROIs with the CNN, and normalizing them;
Five, selecting p-1 groups of the data of step four as the training set and testing on the remaining group, training the identification model by the CNN and testing it;
Six, repeating step five to perform p-fold cross-validation, obtaining the optimal parameters of the identification model and finally determining the computer-aided diagnosis system for automatic benign/malignant thyroid nodule identification based on deep convolutional neural networks;
The process one specifically: thyroid nodule images are read, including images of at least 5000 benign nodules and images of at least 5000 malignant nodules;
The process two specifically: the thyroid nodule images read in process one are first converted to grayscale; the labels made by the physician when measuring nodule-related quantities are removed from the ultrasound image using the gray values of the surrounding pixels; Gaussian filtering is then applied for denoising; and finally the contrast is enhanced by gray-level histogram equalization, yielding the preprocessed enhanced images;
The process three specifically:
Step 1: 10000 enhanced images preprocessed by process two are selected, including 5000 benign and 5000 malignant nodules;
Step 2: for each image, the nodule part and the non-nodule part are first cropped manually, and the model for automatic segmentation is then trained by the CNN;
The CNN is a network structure composed of 13 convolutional layers and 2 downsampling layers; the kernel sizes of the convolutional layers are: 13x13 for the first layer, 5x5 for the second and third layers, and 3x3 for the remaining layers; the strides of the convolutional layers are: 2 for the first two convolutional layers and 1 for all the others; the downsampling layers are all of size 3x3 with stride 2;
The specific method of training the model for automatic segmentation by the CNN is:
(1) features are learned automatically by the convolutional layers and downsampling layers of the CNN and are extracted, with the specific steps:
Step A: in a convolutional layer, the feature maps of the previous layer are convolved with a learnable convolution kernel and passed through an activation function to obtain the output feature map; each output is the convolution of one input with a kernel, or a combination of several convolved inputs:
Wherein, the symbol * indicates the convolution operator; the l indicates the layer index; the i indicates the i-th neuron node of layer l-1; the j indicates the j-th neuron node of layer l; the M_j indicates the set of selected input maps; the x^{l-1} refers to the output of layer l-1, which serves as the input to layer l; the f is the activation function, taken here to be the sigmoid f(x) = 1/(1+e^{-x}); e denotes Euler's number 2.718281828..., and e^x is the exponential function; the k is the convolution kernel; the b is the bias; each output map is given an additive bias b, but for a given output map, the convolution kernel convolving each input map is different;
This step also requires a gradient computation to update the sensitivity, where the sensitivity expresses how much the error changes when the bias b changes:
Wherein, the l indicates the layer index; the j indicates the j-th neuron node of layer l; the symbol ∘ indicates element-wise multiplication; the δ indicates the sensitivity of the output neuron, i.e. the rate of change of the error with respect to the bias b; the s^l = W^l x^{l-1} + b^l, where x^{l-1} refers to the output of layer l-1, W is the weight, and b is the bias; the f is the activation function, taken here to be the sigmoid f(x) = 1/(1+e^{-x}); e denotes Euler's number 2.718281828..., and e^x is the exponential function; f'(x) is the derivative of f(x); the β indicates the weight shared within each layer; the up(·) indicates an upsampling operation;
Then all nodes of the sensitivity map of layer l are summed, which quickly yields the gradient of the bias b:
Wherein, the l indicates the layer index; the j indicates the j-th neuron node of layer l; the b indicates the bias; the δ indicates the sensitivity of the output neuron, i.e. the rate of change of the error with respect to b; the u, v indicate the position (u, v) in the output map; the E is the error function, here E = (1/2) Σ_h (t_h^n - y_h^n)^2; the C indicates the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; the t_h^n indicates the h-th dimension of the label of the n-th sample; the y_h^n indicates the h-th output of the network for the n-th sample;
Finally, the BP algorithm is used to compute the weights of the convolution kernels:
Wherein, the W is the weight parameter; the E is the error function, E = (1/2) Σ_h (t_h^n - y_h^n)^2; the C indicates the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; the t_h^n indicates the h-th dimension of the label of the n-th sample; the y_h^n indicates the h-th output of the network for the n-th sample; the η is the learning rate, i.e. the step size; since many connections share the same weight, the gradient for a given weight must be taken over all connections associated with that weight, and these gradients are then summed:
Wherein, the l indicates the layer index; the i indicates the i-th neuron node of layer l; the j indicates the j-th neuron node of layer l; b indicates the bias, and the δ indicates the sensitivity of the output neuron, i.e. the rate of change of the error with respect to b; the u, v indicate the position (u, v) in the output map; the E is the error function, here E = (1/2) Σ_h (t_h^n - y_h^n)^2; the C indicates the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; the t_h^n indicates the h-th dimension of the label of the n-th sample; the y_h^n indicates the h-th output of the network for the n-th sample; the k_{ij}^l is the convolution kernel; the (p_i^{l-1})_{uv} is the patch of x_i^{l-1} that is multiplied element by element with k_{ij}^l during the convolution, i.e. each region of the image of the same size as the convolution kernel: the value at position (u, v) of the output convolution map is the result of the element-by-element multiplication of the patch at position (u, v) of the previous layer with the convolution kernel k_{ij}^l;
Step B: a downsampling layer with N input maps has exactly N output maps, except that each output map becomes smaller; then:
Wherein, the f is the activation function, taken here to be the sigmoid f(x) = 1/(1+e^{-x}); e denotes Euler's number 2.718281828..., and e^x is the exponential function; the β indicates the weight shared within each layer; the down(·) indicates a downsampling function: it sums all pixels of each distinct n×n block of the input image, so the output image shrinks by a factor of n in both dimensions; each output map corresponds to its own weight parameter β and its own additive bias b;
The parameters β and b are updated by gradient descent:
Wherein, the conv2 is the two-dimensional convolution operator; the rot180 is a rotation by 180 degrees; the 'full' refers to performing a full convolution; the l indicates the layer index; the i indicates the i-th neuron node of layer l; the j indicates the j-th neuron node of layer l; the b indicates the bias; the δ indicates the sensitivity of the output neuron, i.e. the rate of change of the error with respect to b; the u, v indicate the position (u, v) in the output map; the E is the error function, with the same expression as above, i.e. E = (1/2) Σ_h (t_h^n - y_h^n)^2; the C indicates the dimension of the label: for a two-class problem the label can be written as y_h ∈ {0, 1}, in which case C = 1, or as y_h ∈ {(0,1), (1,0)}, in which case C = 2; the t_h^n indicates the h-th dimension of the label of the n-th sample; the y_h^n indicates the h-th output of the network for the n-th sample; the β is a weight parameter; the down(·) indicates a downsampling function; the k_j^{l+1} is the convolution kernel of layer l+1; the x_j^{l-1} is the j-th neuron node of the output of layer l-1; the s^l = W^l x^{l-1} + b^l, where W is the weight parameter and b is the bias, and s_j^l is the j-th component of s^l;
Step C: the CNN learns the combination of feature maps automatically; the j-th combined feature map is then:
x_j^l = f( Σ_{i=1}^{N_in} α_ij (x_i^{l-1} * k_i^l) + b_j^l ),
s.t. Σ_i α_ij = 1, and 0 ≤ α_ij ≤ 1.
Wherein, the symbol * indicates the convolution operator; the l indicates the layer index; the i indicates the i-th neuron node of layer l; the j indicates the j-th neuron node of layer l; the f is the activation function, taken here to be the sigmoid f(x) = 1/(1+e^{-x}); e denotes Euler's number 2.718281828..., and e^x is the exponential function; the x_i^{l-1} is the i-th component of the output of layer l-1; the N_in indicates the number of input maps; the k is the convolution kernel; the b is the bias; the α_ij indicates, when the output maps of layer l-1 serve as the inputs of layer l, the weight or contribution of the i-th input map of layer l-1 in producing the j-th output map;
(2) The features extracted in (1) are combined with Softmax to identify nodules automatically, determining the model for automatic segmentation; the specific Softmax identification process is: given a sample, a probability value is output, and that probability value expresses the probability of this sample belonging to each class; the loss function is:
Wherein, the m indicates that there are m samples in total; the c indicates that these samples can be divided into c classes altogether; the θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; 1{·} is an indicator function: when the expression inside the braces is true, the value of the function is 1, otherwise it is 0; the λ is a parameter balancing the fidelity term against the regularization term, and λ is taken positive here; the J(θ) refers to the loss function of the system; the e denotes Euler's number 2.718281828..., and e^x is the exponential function; the T is the transposition operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; n denotes the dimension of the weight and bias parameters; x^(i) is the i-th dimension of the input vector; y^(i) is the i-th dimension of each sample label; the gradient is then used to solve the problem:
Wherein, the m indicates that there are m samples in total; the θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; 1{·} is an indicator function: when the expression inside the braces is true, the value of the function is 1, otherwise it is 0; the λ is a parameter balancing the fidelity term against the regularization term, and λ is taken positive here; the J(θ) refers to the loss function of the system; ∇_θ J(θ) is its derivative; the e denotes Euler's number 2.718281828..., and e^x is the exponential function; the T is the transposition operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; x^(i) is the i-th dimension of the input vector; y^(i) is the i-th dimension of each sample label;
(3) The CNN segments the thyroid nodules automatically, i.e. it distinguishes the nodule region from the non-nodule region, finds the boundary of the nodule region, and refines the segmented nodule shape: holes are filled and the connection with the non-nodule region is removed by means of erosion and dilation morphological operators;
Step 3: using the model obtained in step 2, all thyroid nodule images are segmented automatically, yielding the ROIs, i.e. all benign and malignant nodules;
The process four specifically: the ROIs segmented automatically in process three are divided into p groups, and the data are normalized, i.e. after the nodules have been segmented automatically, the features of the nodules are extracted, and a linear transformation is applied to these features so that the resulting values are mapped to [0, 1];
The process five specifically: an identification model is trained with the CNN, and features are extracted from all ROIs;
Then classification is performed with a new kind of Softmax, i.e. a Softmax classifier with only two classes, by solving for the optimal value of a loss function, i.e. minimizing J(θ); the number of classes c of the Softmax classifier equals 2; the probability of belonging to a benign or a malignant nodule is obtained by gradient descent; the specific process is the same as the method of the automatic segmentation process in process three;
The process six specifically: process five is repeated, i.e. of the p groups of data, p-1 groups are selected for training each time and the remaining group is used for testing; this finally yields the optimal parameters of the identification model, and thus the computer-aided diagnosis system for automatic benign/malignant thyroid nodule identification based on deep convolutional neural networks.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2015108619029 | 2015-11-30 | | |
| CN201510861902 | 2015-11-30 | | |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN106056595A | 2016-10-26 |
| CN106056595B | 2019-09-17 |
Family
ID=57175505

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610362069.8A (Active, CN106056595B) | Computer-aided diagnosis system for automatic benign/malignant thyroid nodule identification based on deep convolutional neural networks | 2015-11-30 | 2016-05-26 |
Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN106056595B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102165454B (en) * | 2008-09-29 | 2015-08-05 | Koninklijke Philips Electronics N.V. | Method for improving the robustness of computer-aided diagnosis to image processing uncertainties |
CN103745227A (en) * | 2013-12-31 | 2014-04-23 | Shenyang Aerospace University | Method for identifying benign and malignant lung nodules based on multi-dimensional information |
CN104200224A (en) * | 2014-08-28 | 2014-12-10 | Northwestern Polytechnical University | Method for removing valueless images based on deep convolutional neural networks |
CN104933672B (en) * | 2015-02-26 | 2018-05-29 | Zhejiang Deshang Yunxing Image Technology Co., Ltd. | Method for registering three-dimensional CT and ultrasound liver images based on a fast convex optimization algorithm |
CN104809443B (en) * | 2015-05-05 | 2018-12-28 | Shanghai Jiao Tong University | License plate detection method and system based on convolutional neural networks |
CN104850836B (en) * | 2015-05-15 | 2018-04-10 | Zhejiang University | Automatic insect image identification method based on deep convolutional neural networks |
- 2016-05-26: CN application CN201610362069.8A filed; granted as CN106056595B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106056595A (en) | 2016-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106056595B (en) | Auxiliary diagnosis system for automatic identification of benign and malignant thyroid nodules based on deep convolutional neural networks | |
CN108257135A (en) | Auxiliary diagnosis system for interpreting medical image features based on a deep learning method | |
CN107644420B (en) | Blood vessel image segmentation method based on centerline extraction and nuclear magnetic resonance imaging system | |
CN112529894B (en) | Thyroid nodule diagnosis method based on deep learning network | |
CN111931811B (en) | Calculation method based on super-pixel image similarity | |
CN109389584A (en) | Multi-scale nasopharyngeal tumor segmentation method based on CNN | |
CN111553892B (en) | Lung nodule segmentation calculation method, device and system based on deep learning | |
Pan et al. | Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review | |
CN112263217B (en) | Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method | |
CN101551854B (en) | Processing system and method for unbalanced medical images | |
CN112381164B (en) | Ultrasound image classification method and device based on multi-branch attention mechanism | |
JP2023544466A (en) | Training method and device for diagnostic model of lung adenocarcinoma and squamous cell carcinoma based on PET/CT | |
Yonekura et al. | Improving the generalization of disease stage classification with deep CNN for glioma histopathological images | |
CN114693933A (en) | Medical image segmentation device based on generative adversarial network and multi-scale feature fusion | |
JP7427080B2 (en) | Weakly supervised multitask learning for cell detection and segmentation | |
Song et al. | Hybrid deep autoencoder with Curvature Gaussian for detection of various types of cells in bone marrow trephine biopsy images | |
CN112085113B (en) | Severe tumor image recognition system and method | |
CN115546605A (en) | Training method and device based on image labeling and segmentation model | |
Aslam et al. | Liver-tumor detection using CNN ResUNet | |
CN114332572B (en) | Method for extracting multi-scale fused feature parameters from breast lesion ultrasound images based on a saliency-map-guided hierarchical dense feature fusion network | |
WO2021183765A1 (en) | Automated detection of tumors based on image processing | |
Solanki et al. | Brain tumour detection and classification by using deep learning classifier | |
Ahmad et al. | Brain tumor detection & features extraction from MR images using segmentation, image optimization & classification techniques | |
CN109214388B (en) | Tumor segmentation method and device based on personalized fusion network | |
Azli et al. | Ultrasound image segmentation using a combination of edge enhancement and Kirsch's template method for detecting follicles in ovaries | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 310012, Room 709–710, 7th Floor, East Building, No. 90 Wensan Road, Xihu District, Hangzhou City, Zhejiang Province. Applicant after: Zhejiang Deshang Yunxing Medical Technology Co., Ltd. Address before: Room 801/802, 8th Floor, East Science and Technology Building, Building 6, East Software Park, No. 90 Wensan Road, Hangzhou City, Zhejiang Province. Applicant before: ZHEJIANG DESHANG YUNXING IMAGE SCIENCE & TECHNOLOGY CO., LTD. |
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant |