CN108052977A - Deep learning classification method for breast molybdenum target images based on a lightweight neural network - Google Patents
- Publication number
- CN108052977A CN108052977A CN201711343994.7A CN201711343994A CN108052977A CN 108052977 A CN108052977 A CN 108052977A CN 201711343994 A CN201711343994 A CN 201711343994A CN 108052977 A CN108052977 A CN 108052977A
- Authority
- CN
- China
- Prior art keywords
- image
- breast
- neural network
- pixel
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The present invention relates to a deep learning classification method for breast molybdenum target (mammography) images based on a lightweight neural network. The method applies a deep-learning image classification algorithm to achieve breast density classification of breast molybdenum target images, using a deep learning framework built on a lightweight neural network. The method significantly improves adaptability on small-scale image data sets, thereby improving the accuracy and processing speed of breast density classification, and enables automated breast density classification of breast molybdenum target images.
Description
Technical field
The invention belongs to the biomedical field, and in particular relates to a deep learning classification method for breast molybdenum target images based on a lightweight neural network.
Background technology
Breast molybdenum target examination, formally full-field mammography and also known as molybdenum-palladium examination, is currently the first-choice, simplest and most reliable non-invasive means of diagnosing breast disease. It causes relatively little pain, is simple to perform, offers high resolution and good reproducibility, and the retained images allow before-and-after comparison without limitation by age or body shape; it is now a routine examination method. As a relatively non-invasive examination, breast molybdenum target imaging can comprehensively and fairly accurately reflect the gross anatomical structure of the entire breast. It allows observation of the influence of physiological factors such as the menstrual cycle, pregnancy and lactation on breast structure, including dynamic observation; it assists in distinguishing benign lesions from malignant tumors of the breast; it enables early detection of suspicious lesions with regular follow-up imaging; and for breast cancer patients it supports follow-up examination of lesions after endocrine therapy, radiotherapy or chemotherapy, observation of treatment efficacy, and regular monitoring of the contralateral breast.
Breast molybdenum target imaging is currently the most important non-invasive screening method for breast cancer, and the Breast Imaging Reporting and Data System (BI-RADS) divides molybdenum target breast density into four grades as an important diagnostic basis. However, medical molybdenum target image samples are few in number, highly variable, and unevenly distributed in density. In breast molybdenum target image processing and analysis, manual identification can only roughly delineate the breast region boundary and qualitatively estimate the breast density within the region, which falls short of the accuracy and speed required for breast density classification. Traditional automatic density classification methods for breast molybdenum target images also have shortcomings that seriously affect the analysis results: the breast itself varies greatly in shape, so traditional appearance-model-based methods struggle to segment the various tissues, leading to inaccurate boundary delineation between the breast and the image background; and the density distribution of tissues inside the breast is extremely uneven, so density histogram statistics easily mistake a part for the whole, causing errors in the statistical analysis of global breast density. These problems seriously degrade the discrimination accuracy and processing speed of breast molybdenum target density classification.
Summary of the invention
It is an object of the present invention to provide a deep learning classification method for breast molybdenum target images based on a lightweight neural network, which processes and analyzes digitized breast molybdenum target images produced by routine breast examination using deep learning, so that breast molybdenum target images can be automatically classified by density, reducing the workload of imaging physicians and improving the breast disease diagnosis rate.
To achieve the above object, the technical scheme of the present invention is a deep learning classification method for breast molybdenum target images based on a lightweight neural network, comprising the following steps:
(I) Perform pixel gray-level gradient weight calculation on all original images in a breast molybdenum target data set of known density classification to obtain the corresponding gradient weight maps;
(II) Perform closed-region erosion and dilation operations on the gradient weight maps to remove artificial interfering objects from the images, obtaining foreground region images containing only the breast and pectoral muscle;
(III) Fuse the foreground region images with the corresponding original images in the breast molybdenum target data set of known density classification to obtain an image training set containing only the pectoral muscle and mammary gland;
(IV) Build a deep learning framework based on a lightweight neural network comprising 12 layers in total, in order: an input layer; a convolutional layer with convolution kernels and a rectified linear unit (ReLU) activation function; a pooling layer with kernels and a max-pooling sampling function; a convolutional layer with convolution kernels and ReLU activation; a pooling layer with kernels and max pooling; a convolutional layer with convolution kernels and ReLU activation; a pooling layer with kernels and max pooling; a data flattening layer; a 64-unit fully connected layer; a dropout layer with a drop ratio of 0.5; a 4-unit fully connected layer; and an activation layer with a normalized exponential (Softmax) activation function as the output layer;
(V) Increase the number of input samples from the training set images by sample expansion and feed them to the deep learning framework; the neural network automatically computes classification results and compares them with the true classes, and the errors are fed back into the neural network to revise each convolution kernel parameter; the training set images are then passed through the revised network to recompute classification results and compared with the true classes again, with the errors fed back for further revision; this loop is repeated 200 times to complete the training process;
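The compute-compare-feedback loop of step (V) can be sketched as follows. This is a minimal numpy stand-in using a toy softmax classifier in place of the patented 12-layer network; the feature count, learning rate, and random data are illustrative assumptions, with only the 200-iteration feedback pattern taken from the text.

```python
import numpy as np

# Toy stand-in for the network of step (IV): a single softmax layer trained
# with the same loop - compute results, compare with true classes, feed the
# error back to revise the parameters, repeat 200 times.
rng = np.random.default_rng(0)
n_features, n_classes = 16, 4          # 4 classes, as in the BI-RADS grades
X = rng.normal(size=(32, n_features))  # stand-in for one expanded batch
y = rng.integers(0, n_classes, size=32)
W = np.zeros((n_features, n_classes))  # stand-in for the kernel parameters

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):                        # "loop 200 times"
    probs = softmax(X @ W)                  # network computes classification
    onehot = np.eye(n_classes)[y]           # true classes for comparison
    grad = X.T @ (probs - onehot) / len(y)  # error fed back ...
    W -= 0.5 * grad                         # ... to revise the parameters

accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```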
(VI) Perform pixel gray-level gradient weight calculation on unclassified images to obtain the corresponding gradient weight maps;
(VII) Perform closed-region erosion and dilation operations on the gradient weight maps of the unclassified images to remove artificial interfering objects, obtaining foreground region images containing only the breast and pectoral muscle;
(VIII) Fuse the foreground region images with the corresponding unclassified original images to obtain test images containing only the pectoral muscle and mammary gland;
(IX) Input the test images into the trained neural network, which automatically computes the classification results, completing the test process.
In an embodiment of the present invention, step (I) is specifically implemented as follows:
a) Traverse every pixel of the image from top to bottom and left to right, compute the difference between each pixel and its adjacent pixels in the horizontal and vertical directions, and combine the two difference values to obtain a gradient containing both horizontal and vertical change information;
b) The gradient weight of a single pixel is the reciprocal of its gradient; the gradient weights of all pixels form a gradient weight image of the same size as the original image.
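Steps a)-b) can be sketched with numpy. The text does not specify how the two directional differences are combined, so the Euclidean magnitude used here, and the small epsilon guarding the reciprocal, are assumptions.

```python
import numpy as np

# Sketch of steps a)-b): per-pixel gradient from horizontal and vertical
# neighbor differences, then weights as the reciprocal of the gradient.
def gradient_weight_map(img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal neighbor difference
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical neighbor difference
    grad = np.hypot(gx, gy)                 # combine the two directions (assumed)
    return 1.0 / (grad + eps)               # weight = reciprocal of gradient

weights = gradient_weight_map(np.arange(16.0).reshape(4, 4))
```

Flat regions (small gradients) receive large weights, so the homogeneous breast interior dominates the weight map while sharp edges are down-weighted.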
In an embodiment of the present invention, step (II) is specifically implemented as follows:
a) Perform an erosion operation on the gradient weight map, using a diamond of size 5 pixels as the structuring element. Erode the edges of the closed regions in the image to remove linear objects narrower than 10 pixels, separating the foreground region containing the breast and pectoral muscle from the artificial interfering objects;
b) Perform a dilation operation on the gradient weight image after the linear objects narrower than 10 pixels have been removed, again using a diamond of size 5 pixels as the structuring element; dilate the edges of the closed regions to restore the original boundaries of the main structures in the image;
c) Since the breast and pectoral muscle region is the main structure of the molybdenum target image, retain the structure with the largest area in the gradient weight image; this is the foreground region containing only the breast and pectoral muscle, and the border of this region is the boundary between foreground and background.
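Steps a)-c) can be sketched with SciPy's morphology routines. The patent does not name a library or say how the weight map is binarized, so starting from a binary mask, and the particular SciPy calls, are assumptions; the 5-pixel diamond structuring element follows the text.

```python
import numpy as np
from scipy import ndimage

# Sketch of steps a)-c): erosion then dilation with a diamond structuring
# element, then keeping only the largest-area connected structure.
def largest_foreground(mask: np.ndarray) -> np.ndarray:
    r = 2                                    # diamond of "size 5 pixels"
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    diamond = (np.abs(y) + np.abs(x)) <= r   # 5x5 diamond element
    opened = ndimage.binary_dilation(        # a) erode, then b) dilate back
        ndimage.binary_erosion(mask, structure=diamond), structure=diamond)
    labels, n = ndimage.label(opened)        # closed regions that survive
    if n == 0:
        return opened
    sizes = ndimage.sum(opened, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))  # c) keep the largest area

mask = np.zeros((40, 40), bool)
mask[5:30, 5:30] = True          # large "breast and pectoral muscle" region
mask[35, :] = True               # thin linear interfering object
fg = largest_foreground(mask)    # the thin line is eroded away
```

The erosion removes thin linear objects (labels, markers) outright, the dilation restores the boundary of the surviving main structure, and the largest-area selection discards any remaining small regions.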
In an embodiment of the present invention, step (III) is specifically implemented as follows:
a) Convert the binarized mask (Mask) to the bit depth of the corresponding original image;
b) Perform an element-wise matrix product between each original image in the breast molybdenum target data set and its corresponding foreground region image of the same size; the matrix after this product operation is the foreground image;
c) Repeat the product operation for all images in the database to obtain the image training set containing only the pectoral muscle and mammary gland.
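The fusion of steps a)-c) is an element-wise mask multiplication; a minimal numpy sketch (the 4 x 4 example values are illustrative):

```python
import numpy as np

# Sketch of steps a)-b): cast the binary mask to the original image's
# dtype, then multiply element-wise so background pixels become zero.
def apply_mask(original: np.ndarray, mask: np.ndarray) -> np.ndarray:
    mask = mask.astype(original.dtype)   # a) match the original's bit depth
    return original * mask               # b) element-wise product = foreground

img = np.full((4, 4), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
fg = apply_mask(img, mask)               # only masked pixels survive
```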
In an embodiment of the present invention, step (IV) is specifically implemented as follows:
a) Add an input layer of 200 × 200 pixels;
b) Add a convolutional layer (CNN) with 32 convolution kernels of 3 × 3 pixels and a ReLU activation function;
c) Add a pooling layer with 32 kernels of 2 × 2 pixels using max pooling;
d) Add a convolutional layer (CNN) with 32 convolution kernels of 3 × 3 pixels and a ReLU activation function;
e) Add a pooling layer with 32 kernels of 2 × 2 pixels using max pooling;
f) Add a convolutional layer (CNN) with 64 convolution kernels of 3 × 3 pixels and a ReLU activation function;
g) Add a pooling layer with 64 kernels of 2 × 2 pixels using max pooling;
h) Add a data flattening layer;
i) Add a 64-unit fully connected layer;
j) Add a dropout layer with a drop ratio of 0.5;
k) Add a 4-unit fully connected layer;
l) Add an activation layer using the Softmax activation function as the output layer;
m) Build the lightweight neural network deep learning framework comprising the above 12 layers.
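The shape arithmetic of this 12-layer stack can be walked through in plain Python. Valid (unpadded) 3 × 3 convolutions and stride-2 pooling are assumptions, but they reproduce the 33856-element flatten size stated in the description of Fig. 3 (23 × 23 × 64).

```python
# Pure-Python shape walk through the 12-layer stack of step (IV).
layers = [
    ("input", None), ("conv3x3", 32), ("maxpool2x2", 32),
    ("conv3x3", 32), ("maxpool2x2", 32),
    ("conv3x3", 64), ("maxpool2x2", 64),
    ("flatten", None), ("dense", 64), ("dropout", 64),
    ("dense", 4), ("softmax", 4),
]

size, channels = 200, 1                  # 200 x 200 single-channel input
for kind, width in layers:
    if kind == "conv3x3":
        size, channels = size - 2, width  # valid conv shrinks each side by 2
    elif kind == "maxpool2x2":
        size = size // 2                  # stride-2 pooling halves the size

flatten_units = size * size * channels    # 23 * 23 * 64 = 33856
```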
In an embodiment of the present invention, step (V) is specifically implemented as follows:
a) Randomly select 1 sample from the training set images and input it to the deep learning framework; apply random transformations to the sample, including rotation, width scaling, length scaling and cropping, to generate 32 corresponding samples;
b) Input the 32 randomly generated samples to the deep learning framework; the neural network automatically computes classification results and compares them with the true classes, obtaining the corresponding accuracy and information loss value; save the three parameters accuracy, information loss value and error;
c) Feed the errors back into the neural network to revise each convolution kernel parameter in the network;
d) Again randomly select 1 sample from the training set images and perform the random-transformation sample expansion; pass the 32 randomly generated samples through the revised network to recompute classification results, compare them with the true classes, and feed the error back to the neural network for revision; repeat this loop 200 times to complete the training process;
e) Plot the accuracy and loss values obtained in each training pass as an accuracy curve and a loss curve, respectively, and save them.
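The sample expansion of step a) can be sketched with numpy. Real rotation and scale transforms would use an image library; the 90-degree rotations, flips and random crops below are simple stand-ins and an assumption for illustration, with only the one-sample-to-32 expansion taken from the text.

```python
import numpy as np

# Sketch of step a): expand one training image into 32 randomly
# transformed samples.
def expand_sample(img: np.ndarray, n: int = 32, seed: int = 0) -> list:
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n):
        s = np.rot90(img, k=rng.integers(4))    # random rotation (stand-in)
        if rng.random() < 0.5:
            s = np.fliplr(s)                    # random horizontal flip
        dy, dx = rng.integers(0, 3, size=2)     # random crop offset
        s = s[dy:dy + img.shape[0] - 2, dx:dx + img.shape[1] - 2]
        out.append(s)
    return out

samples = expand_sample(np.arange(100.0).reshape(10, 10))
```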
In an embodiment of the present invention, step (VI) is specifically implemented as follows:
a) Traverse every pixel of the image from top to bottom and left to right, compute the difference between each pixel and its adjacent pixels in the horizontal and vertical directions, and combine the two difference values to obtain a gradient containing both horizontal and vertical change information;
b) The gradient weight of a single pixel is the reciprocal of its gradient; the gradient weights of all pixels form a gradient weight image of the same size as the original image.
In an embodiment of the present invention, step (VII) is specifically implemented as follows:
a) Perform an erosion operation on the gradient weight image, using a diamond of size 5 pixels as the structuring element. Erode the edges of the closed regions in the image to remove linear objects narrower than 10 pixels, separating the foreground region containing the breast and pectoral muscle from most of the artificial interfering objects;
b) Perform a dilation operation on the gradient weight image after the linear objects narrower than 10 pixels have been removed, again using a diamond of size 5 pixels as the structuring element; dilate the edges of the closed regions to restore the original boundaries of the main structures in the image;
c) Since the breast and pectoral muscle region is the main structure of the molybdenum target image, retain the structure with the largest area in the gradient weight image; this is the foreground region containing only the breast and pectoral muscle, and the border of this region is the boundary between foreground and background.
In an embodiment of the present invention, step (VIII) is specifically implemented as follows:
a) Convert the binarized foreground region image to the bit depth of the corresponding original image;
b) Perform an element-wise matrix product between each original image in the breast molybdenum target data set and its corresponding Mask of the same size; the matrix after this product operation is the foreground image;
c) Repeat the product operation for all images in the database to obtain the test image set containing only the pectoral muscle and mammary gland.
In an embodiment of the present invention, step (IX) is specifically implemented as follows:
a) Input the test images into the trained neural network, which automatically computes the classification results, completing the test process;
b) Compare the automatically computed classification results with the expert classification results, calculate the classification accuracy rate, and record it.
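The accuracy computation of step b) is a simple label comparison; the example grades below (BI-RADS density grades 1-4) are illustrative values, not results from the patent.

```python
# Sketch of step b): compare predicted density grades with expert labels
# and record the fraction that match as the classification accuracy rate.
predicted = [1, 2, 2, 4, 3, 1, 2, 4]   # network output (illustrative)
expert    = [1, 2, 3, 4, 3, 1, 2, 4]   # expert classification (illustrative)

correct = sum(p == e for p, e in zip(predicted, expert))
accuracy = correct / len(expert)        # 7 of 8 grades match
```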
Compared with the prior art, the present invention has the following beneficial effects:
(1) The method of the present invention uses a deep learning classification method for breast molybdenum target images based on a lightweight neural network, converting the image classification problem into a machine learning problem and achieving automated, intelligent classification of breast molybdenum target images with high speed and efficiency while ensuring the accuracy of breast density classification;
(2) By segmenting out the foreground image containing the mammary gland and pectoral muscle as preprocessing, the method effectively eliminates the influence of artificial interfering objects on the classification results and improves classification accuracy;
(3) By applying random transformations to the limited original images for sample expansion, the method enlarges the number of training set samples and improves neural network training efficiency and classification accuracy;
(4) By simplifying the neural network of conventional deep learning frameworks, preferably into a structure containing 3 CNN layers, the method reduces the complexity of the neural network structure and improves neural network training efficiency and classification accuracy;
(5) The method of the present invention enables real-time online breast molybdenum target density classification and detection.
Description of the drawings
Fig. 1 is a schematic diagram of the steps of the present invention.
Fig. 2 is a schematic diagram of the segmentation of the foreground image containing the mammary gland and pectoral muscle according to the present invention: a) an original breast molybdenum target image without artificial interfering objects; b) the gradient weight map corresponding to a); c) the initial foreground region map corresponding to a), where the red line is the boundary between foreground and background; d) an original breast molybdenum target image with artificial interfering objects; e) the gradient weight map corresponding to d); f) the foreground region map corresponding to d), where the red line is the boundary between foreground and background.
Fig. 3 is a schematic diagram of the deep learning framework for breast molybdenum target images based on the lightweight neural network of the present invention, composed of layers of different types. The number on the input layer indicates that the input image is normalized to 200 × 200 pixels; the numbers on the first convolutional layer (CNN) indicate a kernel size of 3 × 3 pixels with 64 kernels; the numbers on the first max-pooling layer indicate a kernel size of 2 × 2 pixels with 32 kernels; the numbers on the second convolutional layer indicate a kernel size of 3 × 3 pixels with 64 kernels; the numbers on the second max-pooling layer indicate a kernel size of 2 × 2 pixels with 32 kernels; the numbers on the third convolutional layer indicate a kernel size of 3 × 3 pixels with 64 kernels; the numbers on the third max-pooling layer indicate a kernel size of 2 × 2 pixels with 64 kernels; the number on the data flattening layer (Flatten) indicates a one-dimensional parameter of 33856; the number on the first fully connected layer (Dense) indicates a one-dimensional parameter of 64; the number on the first dropout layer (Dropout) indicates a one-dimensional parameter of 64; and the number on the second fully connected layer indicates a one-dimensional parameter of 4, this layer also being the output layer.
Fig. 4 is a schematic comparison of classification results with and without foreground segmentation preprocessing on the breast molybdenum target image analysis database (MIAS) in an embodiment of the present invention.
Fig. 5 is a schematic comparison of classification results with and without sample expansion on the breast molybdenum target image analysis database (MIAS) in an embodiment of the present invention.
Fig. 6 is a schematic comparison of classification results of deep learning frameworks using neural networks with different layer structures on the breast molybdenum target image analysis database (MIAS) in an embodiment of the present invention.
Specific embodiment
Through extensive and in-depth research, the inventors have obtained a deep learning classification method for breast molybdenum target images based on a lightweight neural network. The method uses a deep-learning image classification algorithm to achieve breast density classification of breast molybdenum target images, and because it uses a deep learning framework based on a lightweight neural network, it significantly improves adaptability on small-scale image data sets, thereby improving the accuracy and processing speed of breast density classification and enabling automated breast density classification of breast molybdenum target images.
Before the present invention is described, it should be understood that the invention is not limited to the specific methods and experimental conditions described, as such methods and conditions may vary. It should also be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting; the scope of the present invention is limited only by the appended claims.
The present invention relates to a deep learning classification method for breast molybdenum target images based on a lightweight neural network. Fast foreground segmentation is performed using the gray-level gradient weight image, and the preprocessed breast molybdenum target images serve as the input data set of the deep learning framework. The neural network parameters are trained using the molybdenum target images of the training set, and after training is completed, automatic density classification is performed on a test set of unknown classification. This can significantly reduce the workload of imaging physicians in breast density measurement and thus contribute to the clinical diagnosis of breast disease.
The core idea of the present invention is to introduce deep learning into the automated intelligent classification of breast molybdenum target images. A lightweight neural network divides molybdenum target image samples into four classes according to their density distribution, corresponding to breast density grades 1-4 of the Breast Imaging Reporting and Data System (BI-RADS) standard, saving imaging physicians substantial manpower and providing a reference for the clinical diagnosis of related breast diseases including breast cancer. First, a breast molybdenum target segmentation method based on the gradient weight map removes artificial interfering objects from the image, yielding a foreground region image containing only the breast and pectoral muscle. The deep learning framework based on the lightweight neural network is then trained on the preprocessed training set images. Finally, the test set images are input into the trained neural network, which automatically computes the classification results, giving the final density classification of the breast molybdenum target images.
Breast density classification is a difficult problem in breast molybdenum target image analysis. This method significantly improves the discrimination accuracy and processing speed of breast molybdenum target density classification, can be applied to breast disease diagnosis and screening, provides an effective and reliable analysis tool for related clinical applications and scientific research, and has broad and evident economic and social benefits.
The technical solution adopted by the present invention to solve the technical problem mainly comprises the following steps:
1. Train on a breast molybdenum target data set of known density classification. All original images are preprocessed by gray-level gradient weight calculation to obtain foreground region images containing only the breast and pectoral muscle as the training set; at the same time, a lightweight deep learning framework is built and the neural network is trained with the training set images as input, training being complete after 200 iterative training passes. The training process is implemented in the following 5 steps:
1.1. Compute the pixel gray-level gradient weight for all original images in the breast molybdenum target data set to obtain the corresponding gradient weight maps;
1.2. Perform closed-region erosion and dilation operations on the gradient weight images to remove artificial interfering objects from the images, obtaining foreground region images containing only the breast and pectoral muscle;
1.3. Fuse the foreground regions with the corresponding original images in the breast molybdenum target data set to obtain an image data set containing only the pectoral muscle and mammary gland;
1.4. Build a deep learning framework based on a lightweight neural network comprising 12 layers in total, in order: an input layer of 200 × 200 pixels; a convolutional layer (Convolutional Neural Network, CNN) with 32 convolution kernels of 3 × 3 pixels and a rectified linear unit (ReLU) activation function; a pooling layer (Pooling) with 32 kernels of 2 × 2 pixels using a max-sampling (Maxpooling) function; a convolutional layer with 32 kernels of 3 × 3 pixels and ReLU activation; a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling; a convolutional layer with 64 kernels of 3 × 3 pixels and ReLU activation; a pooling layer with 64 kernels of 2 × 2 pixels using Maxpooling; a data flattening layer (Flatten); a 64-unit fully connected layer (Dense); a dropout layer (Dropout) with a drop ratio of 0.5; a 4-unit fully connected layer; and an activation layer (Activation) with a normalized exponential (Softmax) activation function as the output layer;
1.5. Increase the number of input samples from the training set images by sample expansion and feed them to the deep learning framework; the neural network automatically computes classification results and compares them with the true classes, and the errors are fed back into the neural network to revise each convolution kernel parameter; the training set images are then passed through the revised network to recompute classification results and compared with the true classes again, with the errors fed back for further revision; this loop is repeated 200 times to complete the training process.
2. Perform classification testing on breast molybdenum target images of unknown density classification. The original images are preprocessed by gray-level gradient weight calculation to obtain foreground region images containing only the breast and pectoral muscle as test images, which are then input to the lightweight deep learning framework; the neural network automatically computes the classification results, completing the test process. The test process is implemented in the following 4 steps:
2.1. Perform pixel gray-level gradient weight calculation on the unclassified images to obtain the corresponding gradient weight maps;
2.2. Perform closed-region erosion and dilation operations on the gradient weight maps of the unclassified images to remove artificial interfering objects, obtaining foreground region images (Mask) containing only the breast and pectoral muscle;
2.3. Fuse the Masks with the corresponding unclassified original images to obtain test images containing only the pectoral muscle and mammary gland;
2.4. Input the test images into the trained neural network, which automatically computes the classification results, completing the test process.
In a preferred embodiment of the present invention, the deep learning classification method for breast molybdenum target images based on a lightweight neural network mainly comprises the following steps:
(1) Perform pixel gray-level gradient weight calculation on all original images in the breast molybdenum target data set of known density classification to obtain the corresponding gradient weight maps.
(2) Perform closed-region erosion and dilation operations using the gradient weight images described in step (1) to remove artificial interfering objects from the images, obtaining foreground region images (Mask) containing only the breast and pectoral muscle.
(3) Fuse all original images in the breast molybdenum target data set of known density classification described in step (1) with their corresponding Masks described in step (2) to obtain an image training set containing only the pectoral muscle and mammary gland.
(4) Build a deep learning framework based on a lightweight neural network comprising 12 layers in total, in order: an input layer of 200 × 200 pixels; a convolutional layer (Convolutional Neural Network, CNN) with 32 convolution kernels of 3 × 3 pixels and a rectified linear unit (ReLU) activation function; a pooling layer (Pooling) with 32 kernels of 2 × 2 pixels using a max-sampling (Maxpooling) function; a convolutional layer with 32 kernels of 3 × 3 pixels and ReLU activation; a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling; a convolutional layer with 64 kernels of 3 × 3 pixels and ReLU activation; a pooling layer with 64 kernels of 2 × 2 pixels using Maxpooling; a data flattening layer (Flatten); a 64-unit fully connected layer (Dense); a dropout layer (Dropout) with a drop ratio of 0.5; a 4-unit fully connected layer; and an activation layer (Activation) with a normalized exponential (Softmax) activation function as the output layer.
(5) input sample quantity is increased to deep learning by sample extension using the training set image described in step (3)
Frame is calculated classification results by neutral net and compared with true classification automatically, and errors are fed back in neutral net
Each convolution nuclear parameter is modified, training set image recalculates classification results by revised network and classifies with true
It is compared, error feeds back to neutral net and is modified, so Xun Huan 200 times, completes training process.
(6) pixel grey scale gradient weight calculating is carried out to non-classified image, obtains corresponding gradient weight figure.
(7) erosion and expansive working of closed area are carried out using the gradient weights figure described in step (6), is removed in image
Human disturbance object, obtain the only foreground region image comprising breast and chest muscle(Mask).
(8) the non-classified breast molybdenum target original image described in step (6) and its corresponding Mask step (7) Suo Shu are utilized
It blends, obtains the test image only comprising chest muscle and mammary gland.
(9) inputted using the test image described in step (8) to the neutral net of the completion training described in step (5), by
Neutral net calculates classification results automatically, completes test process.
In a preferred embodiment of the present invention, in the breast molybdenum target image deep learning classification method based on a lightweight neural network, the pixel grey-scale gradient weights of a breast molybdenum target image are computed as follows:
a) Traverse every pixel of the image from top to bottom and from left to right, compute the differences between each pixel and its horizontally and vertically adjacent pixels, and combine the two difference values into a gradient containing both horizontal and vertical change information;
b) The gradient weight of a single pixel is the reciprocal of its gradient; the gradient weights of all pixels constitute a gradient weight image of the same size as the original image.
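The two steps above can be sketched in NumPy. This is a minimal illustration rather than the patented implementation; the small constant `eps` is an added assumption to avoid division by zero in regions with zero gradient.

```python
import numpy as np

def gradient_weight_map(img, eps=1e-6):
    """Pixel grey-scale gradient weight map as in steps a)-b) above.

    Horizontal and vertical neighbour differences are combined into one
    gradient magnitude per pixel; the weight is the reciprocal of that
    gradient.  `eps` (an assumption) guards flat regions.
    """
    img = np.asarray(img, dtype=np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.abs(np.diff(img, axis=1))   # horizontal difference
    gy[:-1, :] = np.abs(np.diff(img, axis=0))   # vertical difference
    grad = np.hypot(gx, gy)                     # combine both directions
    return 1.0 / (grad + eps)                   # weight = inverse gradient
```

By construction the result has the same size as the original image, and smooth (low-gradient) regions receive large weights.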
In a preferred embodiment of the present invention, in the breast molybdenum target image deep learning classification method based on a lightweight neural network, the foreground region image (Mask) containing only the breast and pectoral muscle is segmented as follows:
a) Perform an erosion operation on the gradient weight image, using a diamond of 5 pixels as the structuring element, applied to the edges of the closed regions of the image; this removes linear objects narrower than 10 pixels and separates the foreground region containing the breast and pectoral muscle from most of the artificial interference objects;
b) Perform a dilation operation on the gradient weight image from which the linear objects narrower than 10 pixels have been removed, again using the 5-pixel diamond as the structuring element, applied to the edges of the closed regions, to recover the original boundary of the main structure in the image;
c) Since the breast and pectoral muscle regions are the main structures of a molybdenum target image, retain the structure with the largest area in the gradient weight image; this is the foreground Mask containing only the breast and pectoral muscle, and the boundary of this region is the boundary between foreground and background.
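Steps a)-c) amount to a morphological opening followed by largest-component selection. The following `scipy.ndimage` sketch illustrates one way to realise them; the binarization threshold `thresh` is an assumption, since the source does not state how the weight image is binarized before the morphology is applied.

```python
import numpy as np
from scipy import ndimage

def foreground_mask(weight_map, thresh):
    """Segment the breast + pectoral-muscle foreground (steps a-c above)."""
    # 5-pixel diamond structuring element (3x3 cross dilated once)
    se = ndimage.iterate_structure(
        ndimage.generate_binary_structure(2, 1), 2)
    binary = np.asarray(weight_map) > thresh
    # a) erosion removes thin linear interference objects
    eroded = ndimage.binary_erosion(binary, structure=se)
    # b) dilation recovers the original border of the main structure
    opened = ndimage.binary_dilation(eroded, structure=se)
    # c) keep only the largest connected region as the Mask
    labels, n = ndimage.label(opened)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(opened, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

With a synthetic image containing one large blob and one thin line, the blob survives the opening while the line is erased.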
In a preferred embodiment of the present invention, in the breast molybdenum target image deep learning classification method based on a lightweight neural network, the image training set containing only the pectoral muscle and mammary gland is established as follows:
a) Convert the binarized Mask to the bit depth of the corresponding original image;
b) Perform an element-wise matrix product of every original image in the breast molybdenum target data set with its corresponding Mask of the same size; the matrix resulting from this product operation is the foreground image;
c) Repeat the product operation for all images in the database, obtaining the image training set containing only the pectoral muscle and mammary gland.
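Interpreting the "matrix product" of step b) as an element-wise (Hadamard) product, as the same-size operands imply, the fusion reduces to a one-line masking operation. The 3 × 3 example below is hypothetical:

```python
import numpy as np

def fuse_with_mask(original, mask):
    """Zero out the background of an original image using its binary Mask.

    `original` is an integer image (e.g. uint8), `mask` a boolean array
    of the same size; the element-wise product keeps only the foreground.
    """
    return original * mask.astype(original.dtype)

# Hypothetical 3x3 example: only the masked pixels survive
img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.uint8)
mask = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]], dtype=bool)
fused = fuse_with_mask(img, mask)
```

Applying the same operation to every image/Mask pair in the data set yields the training set of step c).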
In a preferred embodiment of the present invention, in the breast molybdenum target image deep learning classification method based on a lightweight neural network, the lightweight neural network deep learning framework is established as follows:
a) Add an input layer of 200 × 200 pixels;
b) Add a convolutional layer (CNN) with 32 convolution kernels of 3 × 3 pixels using the ReLU activation function;
c) Add a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling;
d) Add a convolutional layer with 32 convolution kernels of 3 × 3 pixels using the ReLU activation function;
e) Add a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling;
f) Add a convolutional layer with 64 convolution kernels of 3 × 3 pixels using the ReLU activation function;
g) Add a pooling layer with 64 kernels of 2 × 2 pixels using Maxpooling;
h) Add a data flattening layer (Flatten);
i) Add a fully connected layer (Dense) of 64 units;
j) Add a data dropout layer (Dropout) with a drop ratio of 0.5;
k) Add a fully connected layer of 4 units;
l) Add an activation layer (Activation) using the Softmax activation function as the output layer;
m) The constructed framework, comprising all 12 of the above layers, is the lightweight neural network deep learning framework.
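Layers a)-l) map naturally onto a Keras-style `Sequential` model. The framework-free sketch below merely traces the feature-map shape through the 12-layer stack; unpadded ("valid") convolutions and non-overlapping 2 × 2 pooling are assumptions, since the source gives only kernel counts and sizes.

```python
def lightweight_net_shapes(h=200, w=200):
    """Trace feature-map shapes through the 12-layer stack above."""
    def conv(s, k, n):  # unpadded k x k convolution with n kernels
        return (s[0] - k + 1, s[1] - k + 1, n)

    def pool(s):        # non-overlapping 2x2 max pooling
        return (s[0] // 2, s[1] // 2, s[2])

    shapes = [("Input", (h, w, 1))]
    for name, step in [("Conv 32@3x3 + ReLU", lambda s: conv(s, 3, 32)),
                       ("MaxPool 2x2",        pool),
                       ("Conv 32@3x3 + ReLU", lambda s: conv(s, 3, 32)),
                       ("MaxPool 2x2",        pool),
                       ("Conv 64@3x3 + ReLU", lambda s: conv(s, 3, 64)),
                       ("MaxPool 2x2",        pool)]:
        shapes.append((name, step(shapes[-1][1])))
    f = shapes[-1][1]
    shapes += [("Flatten", (f[0] * f[1] * f[2],)),
               ("Dense 64", (64,)), ("Dropout 0.5", (64,)),
               ("Dense 4", (4,)), ("Softmax", (4,))]
    return shapes
```

Under these assumptions the last pooling layer emits a 23 × 23 × 64 volume, and the 4-unit Softmax output matches the four density classes.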
In a preferred embodiment of the present invention, in the breast molybdenum target image deep learning classification method based on a lightweight neural network, the sample extension of the input images and the training of the neural network proceed as follows:
a) Randomly select 1 sample from the training-set images and input it to the deep learning framework; apply random transformations to this sample, including rotation (angle range ±20 degrees), width scaling (scale range ±0.2 × width), length scaling (scale range ±0.2 × length), and cropping (crop range ±0.2 × area), to generate 32 corresponding samples;
b) Input the 32 randomly generated samples to the deep learning framework; the neural network automatically computes the classification results and compares them with the true classes, obtaining the corresponding accuracy and information loss value, and the three parameters accuracy, information loss value, and error are saved;
c) Feed the errors back into the neural network and correct each convolution kernel parameter in the network;
d) Again randomly select 1 sample from the training-set images and extend it by random transformation; the resulting 32 randomly generated samples are reclassified by the corrected network and compared with the true classes, and the errors are fed back into the network for correction; this cycle is repeated 200 times to complete the training process;
e) Plot the accuracy and loss values obtained in each training pass as an accuracy curve and a loss curve, respectively, and save them.
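A deep learning framework would normally delegate step a) to a built-in generator (for example Keras' `ImageDataGenerator` with a 20-degree rotation range and 0.2 zoom range). The standalone sketch below approximates the same 1-to-32 sample extension with `scipy.ndimage`; the interpolation and border handling are assumptions, not the patented procedure.

```python
import numpy as np
from scipy import ndimage

def extend_sample(img, n=32, rng=None):
    """Generate n randomly transformed copies of one training image
    (rotation within +/-20 degrees, width/length scaling within +/-0.2),
    mirroring the sample-extension step above."""
    if rng is None:
        rng = np.random.default_rng()
    out = []
    for _ in range(n):
        angle = rng.uniform(-20, 20)                  # rotation range
        zy = 1 + rng.uniform(-0.2, 0.2)               # length scaling
        zx = 1 + rng.uniform(-0.2, 0.2)               # width scaling
        t = ndimage.rotate(img, angle, reshape=False, mode='nearest')
        t = ndimage.zoom(t, (zy, zx), mode='nearest')
        out.append(_fit(t, img.shape))                # crop/pad to size
    return out

def _fit(a, shape):
    """Crop or edge-pad an array back to the target 2-D shape."""
    h, w = shape
    a = a[:h, :w]
    return np.pad(a, ((0, h - a.shape[0]), (0, w - a.shape[1])),
                  mode='edge')
```

Each call turns a single mammogram into a batch of the fixed input size, which is then fed to the network exactly as in steps b)-d).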
In a preferred embodiment of the present invention, in the breast molybdenum target image deep learning classification method based on a lightweight neural network, breast molybdenum target images of unknown classification are tested as follows:
a) Compute pixel grey-scale gradient weights for the unclassified image, obtaining the corresponding gradient weight map;
b) Perform erosion and dilation operations on the closed regions of the gradient weight map of the unclassified image to remove artificial interference objects from the image, obtaining the foreground region image (Mask) containing only the breast and pectoral muscle;
c) Fuse the unclassified original image with its corresponding Mask, obtaining the test image containing only the pectoral muscle and mammary gland;
d) Input the test image into the trained neural network; the network automatically computes the classification result, completing the test process;
e) Compare the automatically computed classification results with the expert classification results, and compute and record the classification accuracy.
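Step e) is a plain agreement count between the automatic and the expert labels; a minimal sketch (the label encoding is an assumption):

```python
def classification_accuracy(predicted, expert):
    """Fraction of automatically computed labels that agree with the
    expert labels (step e above)."""
    if len(predicted) != len(expert):
        raise ValueError("label sequences must have equal length")
    hits = sum(p == e for p, e in zip(predicted, expert))
    return hits / len(predicted)
```

For example, three agreements out of four labels give an accuracy of 0.75.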
The invention is further described below with reference to the accompanying drawings and embodiments.
Embodiment 1: Classification of images from the Mammographic Image Analysis Society (MIAS) database
A mammography examination, also known as a molybdenum target examination, is performed on the breast to obtain digitized breast molybdenum target images.
Using the breast molybdenum target image deep learning classification method based on a lightweight neural network of the present invention, the acquired breast molybdenum target images of known classification are used for training, and those of unknown classification are subjected to test analysis. As shown in Figure 1, the method mainly comprises the following steps:
1. Train on the breast molybdenum target data set of known density classification: preprocess all original images by grey-scale gradient weight calculation to obtain the foreground region images containing only the breast and pectoral muscle as the training set, build the lightweight deep learning framework, use the training-set images, extended by sample extension, as input to train the neural network, and complete training after 200 iterations. The training process comprises five specific steps (see Figure 1):
1.1 Compute pixel grey-scale gradient weights for all original images in the breast molybdenum target data set, obtaining the corresponding gradient weight maps;
1.2 Perform erosion and dilation operations on the closed regions of the gradient weight images to remove the artificial interference objects from the images, obtaining foreground region images containing only the breast and pectoral muscle;
1.3 Fuse all original images in the breast molybdenum target data set with their corresponding foreground regions, obtaining an image data set containing only the pectoral muscle and mammary gland;
1.4 Build the deep learning framework based on the lightweight neural network, comprising 12 layers of all types in total;
1.5 Increase the number of input samples from the training-set images by sample extension and feed them to the deep learning framework; the neural network automatically computes classification results and compares them with the true classes, the errors are fed back into the network to correct each convolution kernel parameter, the training-set images are reclassified by the corrected network and compared with the true classes again, and the errors are fed back for further correction; this cycle is repeated 200 times to complete the training process.
2. Test classification of breast molybdenum target images of unknown density classification: preprocess the original image by grey-scale gradient weight calculation to obtain the foreground region image containing only the breast and pectoral muscle as the test image, then input it to the lightweight deep learning framework; the neural network automatically computes the classification result, completing the test process. The test process comprises four specific steps (see Figure 1):
2.1 Compute pixel grey-scale gradient weights for the unclassified image, obtaining the corresponding gradient weight map;
2.2 Perform erosion and dilation operations on the closed regions of the gradient weight map of the unclassified image to remove the artificial interference objects from the image, obtaining the foreground region image (Mask) containing only the breast and pectoral muscle;
2.3 Fuse the unclassified original image with its corresponding Mask, obtaining the test image containing only the pectoral muscle and mammary gland;
2.4 Input the test image into the trained neural network; the network automatically computes the classification result, completing the test process.
3. Compute the grey-scale gradient weight images corresponding to the breast molybdenum target images that have been denoised and enhanced by preprocessing. The specific process comprises two steps (see Figure 2):
3.1 Traverse every pixel of the image from top to bottom and from left to right, compute the differences between each pixel and its horizontally and vertically adjacent pixels, and combine the two difference values into a gradient containing both horizontal and vertical change information;
3.2 The gradient weight of a single pixel is the reciprocal of its gradient; the gradient weights of all pixels constitute a gradient weight image of the same size as the original image (as shown in Figures 2b and 2e).
4. Perform erosion and dilation operations on the closed regions of the gradient weight image, inspect the inflection points of the boundary between the breast at the top of the image and adherent artificial interference objects, remove the artificial interference objects from the image, and obtain the boundary between the image background and the foreground region containing only the breast and pectoral muscle. The specific process comprises three steps (see Figure 2):
4.1 Perform an erosion operation on the gradient weight image, using a diamond of 5 pixels as the structuring element, applied to the edges of the closed regions; this removes linear objects narrower than 10 pixels and separates the foreground region containing the breast and pectoral muscle from most of the artificial interference objects;
4.2 Perform a dilation operation on the gradient weight image from which the linear objects narrower than 10 pixels have been removed, again using the 5-pixel diamond as the structuring element, applied to the edges of the closed regions, to recover the original boundary of the main structure in the image;
4.3 Since the breast and pectoral muscle regions are the main structures of a molybdenum target image, retain the structure with the largest area in the gradient weight image (as shown in Figures 2c and 2f); this is the foreground region containing only the breast and pectoral muscle, and the grey contour line in the figure marks the boundary between foreground and background.
5. Sequentially add layers of all types to build the lightweight neural network deep learning framework comprising 12 layers in total. The specific process comprises 13 steps (see Figure 3):
5.1 Add an input layer of 200 × 200 pixels;
5.2 Add a convolutional layer (CNN) with 32 convolution kernels of 3 × 3 pixels using the ReLU activation function;
5.3 Add a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling;
5.4 Add a convolutional layer with 32 convolution kernels of 3 × 3 pixels using the ReLU activation function;
5.5 Add a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling;
5.6 Add a convolutional layer with 64 convolution kernels of 3 × 3 pixels using the ReLU activation function;
5.7 Add a pooling layer with 64 kernels of 2 × 2 pixels using Maxpooling;
5.8 Add a data flattening layer (Flatten);
5.9 Add a fully connected layer (Dense) of 64 units;
5.10 Add a data dropout layer (Dropout) with a drop ratio of 0.5;
5.11 Add a fully connected layer of 4 units;
5.12 Add an activation layer (Activation) using the Softmax activation function as the output layer;
5.13 The constructed framework, comprising all 12 of the above layers, is the lightweight neural network deep learning framework; the complete framework is shown in Figure 3.
Supplementary result 1: Figure 4 compares, on the breast molybdenum target image analysis database (MIAS) used in this embodiment, the classification results with and without the foreground segmentation preprocessing. In each subfigure the smoother curve is the training curve produced by the training process and the more strongly fluctuating curve is the test curve produced by the test process: a) accuracy curve for classification on the unsegmented original images; b) loss curve for classification on the unsegmented original images; c) accuracy curve for classification on the segmented foreground images containing the mammary gland and pectoral muscle region; d) loss curve for classification on the segmented foreground images.
Supplementary result 2: Figure 5 compares, on the MIAS database in this embodiment, the classification results with and without sample extension. In each subfigure the smoother curve is the training curve and the more strongly fluctuating curve is the test curve: a) accuracy curve for classification on the original images without sample extension; b) loss curve for classification on the original images without sample extension; c) accuracy curve for classification on the randomly generated image set with sample extension; d) loss curve for classification on the randomly generated image set with sample extension.
Supplementary result 3: Figure 6 compares, on the MIAS database in this embodiment, the classification results of deep learning frameworks with artificial neural networks of different depths. In each subfigure the smoother curve is the training curve and the more strongly fluctuating curve is the test curve: a) accuracy curve for classification with a 2-layer CNN; b) loss curve for classification with a 2-layer CNN; c) accuracy curve for classification with a 3-layer CNN; d) loss curve for classification with a 3-layer CNN; e) accuracy curve for classification with a 16-layer CNN; f) loss curve for classification with a 16-layer CNN.
The above embodiment on the breast molybdenum target image analysis (MIAS) database shows that the deep learning framework combining the segmented-foreground-image step, the sample extension step, and the 3-layer CNN network structure has a clear advantage in classification accuracy; after 200 rounds of training and the corresponding tests, the final classification accuracy obtained is about 84.8%.
All references mentioned in the present invention are incorporated herein by reference, just as if each were individually incorporated by reference. In addition, it should be understood that, having read the above teachings of the present invention, those skilled in the art may make various changes or modifications to the present invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
The above are preferred embodiments of the present invention; all changes made according to the technical solution of the present invention whose produced function does not depart from the scope of the technical solution of the present invention belong to the scope of protection of the present invention.
Claims (10)
1. A breast molybdenum target image deep learning classification method based on a lightweight neural network, characterized by comprising the following steps:
(I) computing pixel grey-scale gradient weights for all original images in a breast molybdenum target data set of known density classification, obtaining the corresponding gradient weight maps;
(II) performing erosion and dilation operations on the closed regions of the gradient weight maps to remove artificial interference objects from the images, obtaining foreground region images containing only the breast and pectoral muscle;
(III) fusing all original images in the breast molybdenum target data set of known density classification with their corresponding foreground region images, obtaining an image training set containing only the pectoral muscle and mammary gland;
(IV) building a deep learning framework based on a lightweight neural network comprising 12 layers in total, in sequence: an input layer; a convolutional layer with convolution kernels using a rectified linear unit activation function; a pooling layer with convolution kernels using a max-sampling function; a convolutional layer with convolution kernels using the ReLU activation function; a pooling layer with convolution kernels using Maxpooling; a convolutional layer with convolution kernels using the ReLU activation function; a pooling layer with convolution kernels using Maxpooling; a data flattening layer; a fully connected layer of 64 units; a data dropout layer with a drop ratio of 0.5; a fully connected layer of 4 units; and an activation layer using a normalized exponential activation function as the output layer;
(V) increasing the number of input samples from the training-set images by sample extension and feeding them to the deep learning framework; the neural network automatically computes classification results and compares them with the true classes, the errors are fed back into the neural network to correct each convolution kernel parameter, the training-set images are reclassified by the corrected network and compared with the true classes again, and the errors are fed back for further correction; this cycle is repeated 200 times to complete the training process;
(VI) computing pixel grey-scale gradient weights for an unclassified image, obtaining the corresponding gradient weight map;
(VII) performing erosion and dilation operations on the closed regions of the gradient weight map of the unclassified image to remove artificial interference objects from the image, obtaining the foreground region image containing only the breast and pectoral muscle;
(VIII) fusing the unclassified original image with its corresponding foreground region image, obtaining a test image containing only the pectoral muscle and mammary gland;
(IX) inputting the test image into the trained neural network, which automatically computes the classification result, completing the test process.
2. The method according to claim 1, characterized in that step (I) is specifically implemented as follows:
a) traverse every pixel of the image from top to bottom and from left to right, compute the differences between each pixel and its horizontally and vertically adjacent pixels, and combine the two difference values into a gradient containing both horizontal and vertical change information;
b) the gradient weight of a single pixel is the reciprocal of its gradient; the gradient weights of all pixels constitute a gradient weight image of the same size as the original image.
3. The method according to claim 1, characterized in that step (II) is specifically implemented as follows:
a) perform an erosion operation on the gradient weight map, using a diamond of 5 pixels as the structuring element, applied to the edges of the closed regions of the image; this removes linear objects narrower than 10 pixels and separates the foreground region containing the breast and pectoral muscle from the artificial interference objects;
b) perform a dilation operation on the gradient weight image from which the linear objects narrower than 10 pixels have been removed, again using the 5-pixel diamond as the structuring element, applied to the edges of the closed regions, to recover the original boundary of the main structure in the image;
c) since the breast and pectoral muscle regions are the main structures of a molybdenum target image, retain the structure with the largest area in the gradient weight image; this is the foreground region containing only the breast and pectoral muscle, and the boundary of this region is the boundary between foreground and background.
4. The method according to claim 1, characterized in that step (III) is specifically implemented as follows:
a) convert the binarized Mask to the bit depth of the corresponding original image;
b) perform an element-wise matrix product of every original image in the breast molybdenum target data set with its corresponding foreground region image of the same size; the matrix resulting from this product operation is the foreground image;
c) repeat the product operation for all images in the database, obtaining the image training set containing only the pectoral muscle and mammary gland.
5. The method according to claim 1, characterized in that step (IV) is specifically implemented as follows:
a) add an input layer of 200 × 200 pixels;
b) add a convolutional layer (CNN) with 32 convolution kernels of 3 × 3 pixels using the ReLU activation function;
c) add a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling;
d) add a convolutional layer with 32 convolution kernels of 3 × 3 pixels using the ReLU activation function;
e) add a pooling layer with 32 kernels of 2 × 2 pixels using Maxpooling;
f) add a convolutional layer with 64 convolution kernels of 3 × 3 pixels using the ReLU activation function;
g) add a pooling layer with 64 kernels of 2 × 2 pixels using Maxpooling;
h) add a data flattening layer;
i) add a fully connected layer of 64 units;
j) add a data dropout layer with a drop ratio of 0.5;
k) add a fully connected layer of 4 units;
l) add an activation layer using the Softmax activation function as the output layer;
m) the constructed framework, comprising all 12 of the above layers, is the lightweight neural network deep learning framework.
6. The method according to claim 1, characterized in that step (V) is specifically implemented as follows:
a) randomly select 1 sample from the training-set images and input it to the deep learning framework; apply random transformations to this sample, including rotation, width scaling, length scaling, and cropping, to generate 32 corresponding samples;
b) input the 32 randomly generated samples to the deep learning framework; the neural network automatically computes the classification results and compares them with the true classes, obtaining the corresponding accuracy and information loss value, and the three parameters accuracy, information loss value, and error are saved;
c) feed the errors back into the neural network and correct each convolution kernel parameter in the network;
d) again randomly select 1 sample from the training-set images and extend it by random transformation; the resulting 32 randomly generated samples are reclassified by the corrected network and compared with the true classes, and the errors are fed back into the network for correction; this cycle is repeated 200 times to complete the training process;
e) plot the accuracy and loss values obtained in each training pass as an accuracy curve and a loss curve, respectively, and save them.
7. The method according to claim 1, characterized in that step (VI) is specifically implemented as follows:
a) traverse every pixel of the image from top to bottom and from left to right, compute the differences between each pixel and its horizontally and vertically adjacent pixels, and combine the two difference values into a gradient containing both horizontal and vertical change information;
b) the gradient weight of a single pixel is the reciprocal of its gradient; the gradient weights of all pixels constitute a gradient weight image of the same size as the original image.
8. The method according to claim 1, characterized in that step (VII) is specifically implemented as follows:
a) perform an erosion operation on the gradient weight image, using a diamond of 5 pixels as the structuring element, applied to the edges of the closed regions of the image; this removes linear objects narrower than 10 pixels and separates the foreground region containing the breast and pectoral muscle from most of the artificial interference objects;
b) perform a dilation operation on the gradient weight image from which the linear objects narrower than 10 pixels have been removed, again using the 5-pixel diamond as the structuring element, applied to the edges of the closed regions, to recover the original boundary of the main structure in the image;
c) since the breast and pectoral muscle regions are the main structures of a molybdenum target image, retain the structure with the largest area in the gradient weight image; this is the foreground region containing only the breast and pectoral muscle, and the boundary of this region is the boundary between foreground and background.
9. The method according to claim 1, characterized in that step (VIII) is specifically implemented as follows:
a) convert the binarized foreground region image to the bit depth of the corresponding original image;
b) perform an element-wise matrix product of every original image in the breast molybdenum target data set with its corresponding Mask of the same size; the matrix resulting from this product operation is the foreground image;
c) repeat the product operation for all images in the database, obtaining the image training set containing only the pectoral muscle and mammary gland.
10. The method according to claim 1, characterized in that step (IX) is specifically implemented as follows:
a) input the test image into the trained neural network; the network automatically computes the classification result, completing the test process;
b) compare the automatically computed classification results with the expert classification results, and compute and record the classification accuracy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711343994.7A CN108052977B (en) | 2017-12-15 | 2017-12-15 | Mammary gland molybdenum target image deep learning classification method based on lightweight neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108052977A true CN108052977A (en) | 2018-05-18 |
CN108052977B CN108052977B (en) | 2021-09-14 |
Family
ID=62132261
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108052977B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102147857A (en) * | 2011-03-22 | 2011-08-10 | 黄晓华 | Image processing method for detecting similar round by using improved hough transformation |
CN102708550A (en) * | 2012-05-17 | 2012-10-03 | 浙江大学 | Blind deblurring algorithm based on natural image statistic property |
CN103985108A (en) * | 2014-06-03 | 2014-08-13 | 北京航空航天大学 | Method for multi-focus image fusion through boundary detection and multi-scale morphology definition measurement |
CN104683767A (en) * | 2015-02-10 | 2015-06-03 | 浙江宇视科技有限公司 | Fog penetrating image generation method and device |
- 2017-12-15: Application CN201711343994.7A filed in China; granted as CN108052977B, status active
Non-Patent Citations (2)
Title |
---|
Alex Krizhevsky: "ImageNet Classification with Deep Convolutional Neural Networks", Communications of the ACM * |
王洪亮 (Wang Hongliang): "Application of the Improved Gradient Inverse Weighted Algorithm in Image Smoothing", 红外技术 (Infrared Technology) * |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11636306B2 (en) * | 2018-05-21 | 2023-04-25 | Imagination Technologies Limited | Implementing traditional computer vision algorithms as neural networks |
CN108830282A (en) * | 2018-05-29 | 2018-11-16 | 电子科技大学 | A kind of the breast lump information extraction and classification method of breast X-ray image |
CN109002831A (en) * | 2018-06-05 | 2018-12-14 | 南方医科大学南方医院 | A kind of breast density classification method, system and device based on convolutional neural networks |
CN109035267A (en) * | 2018-06-22 | 2018-12-18 | 华东师范大学 | A kind of image object based on deep learning takes method |
CN111091527A (en) * | 2018-10-24 | 2020-05-01 | 华中科技大学 | Method and system for automatically detecting pathological change area in pathological tissue section image |
CN111091527B (en) * | 2018-10-24 | 2022-07-05 | 华中科技大学 | Method and system for automatically detecting pathological change area in pathological tissue section image |
WO2020107167A1 (en) * | 2018-11-26 | 2020-06-04 | 深圳先进技术研究院 | Method and apparatus for automatic grading of mammary gland density |
CN109636780A (en) * | 2018-11-26 | 2019-04-16 | 深圳先进技术研究院 | Breast density automatic grading method and device |
CN111401396B (en) * | 2019-01-03 | 2023-04-18 | 阿里巴巴集团控股有限公司 | Image recognition method and device |
CN111401396A (en) * | 2019-01-03 | 2020-07-10 | 阿里巴巴集团控股有限公司 | Image recognition method and device |
CN109840906A (en) * | 2019-01-29 | 2019-06-04 | 太原理工大学 | The method that a kind of pair of mammography carries out classification processing |
CN110490850B (en) * | 2019-02-14 | 2021-01-08 | 腾讯科技(深圳)有限公司 | Lump region detection method and device and medical image processing equipment |
CN110490850A (en) * | 2019-02-14 | 2019-11-22 | 腾讯科技(深圳)有限公司 | A kind of lump method for detecting area, device and Medical Image Processing equipment |
CN109902682A (en) * | 2019-03-06 | 2019-06-18 | 太原理工大学 | A kind of mammary gland x line image detection method based on residual error convolutional neural networks |
CN110059717A (en) * | 2019-03-13 | 2019-07-26 | 山东大学 | Convolutional neural networks automatic division method and system for breast molybdenum target data set |
CN111724450A (en) * | 2019-03-20 | 2020-09-29 | 上海科技大学 | Medical image reconstruction system, method, terminal and medium based on deep learning |
CN109993732A (en) * | 2019-03-22 | 2019-07-09 | 杭州深睿博联科技有限公司 | The pectoral region image processing method and device of mammography X |
CN109919254A (en) * | 2019-03-28 | 2019-06-21 | 上海联影智能医疗科技有限公司 | Breast density classification method, system, readable storage medium storing program for executing and computer equipment |
CN110232338A (en) * | 2019-05-29 | 2019-09-13 | 北京邮电大学 | Lightweight Web AR recognition methods and system based on binary neural network |
CN110232338B (en) * | 2019-05-29 | 2021-02-05 | 北京邮电大学 | Lightweight Web AR (augmented reality) identification method and system based on binary neural network |
CN110223280B (en) * | 2019-06-03 | 2021-04-13 | Oppo广东移动通信有限公司 | Venous thrombosis detection method and venous thrombosis detection device |
CN110223280A (en) * | 2019-06-03 | 2019-09-10 | Oppo广东移动通信有限公司 | Phlebothrombosis detection method and phlebothrombosis detection device |
CN110619947A (en) * | 2019-09-19 | 2019-12-27 | 南京工程学院 | Lung CT auxiliary screening system and method based on lightweight deep learning |
CN111598862A (en) * | 2020-05-13 | 2020-08-28 | 北京推想科技有限公司 | Breast molybdenum target image segmentation method, device, terminal and storage medium |
CN115909006A (en) * | 2022-10-27 | 2023-04-04 | 武汉兰丁智能医学股份有限公司 | Mammary tissue image classification method and system based on convolution Transformer |
CN115909006B (en) * | 2022-10-27 | 2024-01-19 | 武汉兰丁智能医学股份有限公司 | Mammary tissue image classification method and system based on convolution transducer |
CN116188488A (en) * | 2023-01-10 | 2023-05-30 | 广东省第二人民医院 | Gray gradient-based B-ultrasonic image focus region segmentation method and device |
CN116188488B (en) * | 2023-01-10 | 2024-01-16 | 广东省第二人民医院(广东省卫生应急医院) | Gray gradient-based B-ultrasonic image focus region segmentation method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108052977A (en) | Breast molybdenum target picture depth study classification method based on lightweight neutral net | |
Miki et al. | Classification of teeth in cone-beam CT using deep convolutional neural network | |
CN109685811B (en) | PET/CT high-metabolism lymph node segmentation method based on dual-path U-net convolutional neural network | |
CN104992430B (en) | Full automatic three-dimensional liver segmentation method based on convolutional neural networks | |
CN108257135A (en) | The assistant diagnosis system of medical image features is understood based on deep learning method | |
CN107622492A (en) | Lung splits dividing method and system | |
CN113781439B (en) | Ultrasonic video focus segmentation method and device | |
CN107230206A (en) | A kind of 3D Lung neoplasm dividing methods of the super voxel sequence lung images based on multi-modal data | |
CN103562960B (en) | For generating the assigned unit between the image-region of image and element class | |
CN103679801B (en) | A kind of cardiovascular three-dimensional rebuilding method based on various visual angles X-ray | |
CN107958471A (en) | CT imaging methods, device, CT equipment and storage medium based on lack sampling data | |
CN109035284A (en) | Cardiac CT image dividing method, device, equipment and medium based on deep learning | |
CN107563434A (en) | A kind of brain MRI image sorting technique based on Three dimensional convolution neutral net, device | |
Lin et al. | Tooth numbering and condition recognition on dental panoramic radiograph images using CNNs | |
Zhu et al. | Metal artifact reduction for X-ray computed tomography using U-net in image domain | |
Wang et al. | Functional and anatomical image fusion based on gradient enhanced decomposition model | |
CN106127783A (en) | A kind of medical imaging identification system based on degree of depth study | |
Pradhan et al. | Lung cancer detection using 3D convolutional neural networks | |
Yan et al. | Improved mask R-CNN for lung nodule segmentation | |
CN109741254A (en) | Dictionary training and Image Super-resolution Reconstruction method, system, equipment and storage medium | |
CN116228624A (en) | Multi-mode constitution component marking and analyzing method based on artificial intelligence technology | |
Banjšak et al. | Implementation of artificial intelligence in chronological age estimation from orthopantomographic X-ray images of archaeological skull remains | |
Zhao et al. | Study of image segmentation algorithm based on textural features and neural network | |
CN108038840A (en) | A kind of image processing method, device, image processing equipment and storage medium | |
Du et al. | Mandibular canal segmentation from CBCT image using 3D convolutional neural network with scSE attention |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||