CN109840906A - A method for classifying mammography images - Google Patents

A method for classifying mammography images Download PDF

Info

Publication number
CN109840906A
Authority
CN
China
Prior art keywords
mammography
data
inception
image
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910083214.2A
Other languages
Chinese (zh)
Inventor
李灯熬
赵菊敏
李雪梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN201910083214.2A priority Critical patent/CN109840906A/en
Publication of CN109840906A publication Critical patent/CN109840906A/en
Pending legal-status Critical Current

Links

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention discloses a method for classifying mammography images, belonging to the technical field of mammography image classification. The technical problem to be solved is to provide a method that uses GoogLeNet Inception V4 to classify mammography images. The adopted scheme comprises the following steps: first, preprocessing the mammography images; second, data augmentation, i.e. enlarging the mammography data set; third, training a deep convolutional neural network; fourth, classifying the mammography images. The present invention is applicable to the field of mammography image classification.

Description

A method for classifying mammography images
Technical field
The present invention relates to a method for classifying mammography images and belongs to the technical field of mammography image classification.
Background art
Breast cancer is one of the most common cancers among women, accounting for about 25% of all confirmed cancer diagnoses and ranking second among cancers worldwide. To date there is no proven effective way to prevent breast cancer; early diagnosis and timely treatment are the only way to improve patient survival. Screening mammography (breast molybdenum-target X-ray imaging) is one of the most effective tools for the early diagnosis of breast cancer: clinicians diagnose the condition by examining suspicious masses (Mass) and other structures such as micro-calcification foci (Micro-calcification). Breast cancer has two main manifestations on mammograms: the presence of malignant soft tissue or masses, and the presence of micro-calcifications. Breast masses vary considerably in area and contrast and are easily confounded by artifacts and the surrounding glandular tissue; micro-calcifications are usually very small and easily overlooked or misdiagnosed. All of this greatly affects the accuracy of the doctor's diagnosis.
At present, mammography images are mainly observed, classified and judged manually, with diagnosis relying on experience; errors occur from time to time and, in serious cases, harm the doctor-patient relationship.
Summary of the invention
The present invention overcomes the shortcomings of the prior art. The technical problem to be solved is to provide a method that uses GoogLeNet
Inception V4 to classify mammography images, assisting the relevant personnel in objectively identifying and accurately classifying mammography images.
To solve the above technical problem, the technical solution adopted by the present invention is a method for classifying mammography images, comprising the following steps:
Step 1: preprocess the mammography images
Remove the redundant background from the mammography images as follows:
a. Sum the pixel values of every row of the image to obtain one set of values;
b. Set the threshold to 0, remove the rows whose sum in step a is 0, and keep the rows whose sum is greater than 0;
c. Repeat the above steps for the pixel values of every column of the image;
The resulting image is the image with the redundant background removed;
Step 2: data augmentation, i.e. enlarging the mammography data set, as follows:
Rescale the background-removed mammography image to 350 × 350 pixels;
d. Randomly and repeatedly crop 299 × 299 pixel regions;
e. Randomly flip the image horizontally to increase the number of images;
f. Add Gaussian noise;
g. Normalize;
Repeat steps d to g several times to augment the mammography data;
Step 3: train a deep convolutional neural network
Train on the augmented mammography data using GoogLeNet Inception V4 to build a mammography big-data network model.
GoogLeNet is a deep convolutional neural network developed on the basis of LeNet, and the deep convolutional neural network adopts the Inception V4 structure.
The Inception V4 structure uses residual connections (Residual Connection) to combine the Inception modules with residual networks (ResNet).
Step 4: classify the mammography images
Compare newly input mammography images against the big-data network model obtained by training the deep convolutional neural network, and classify them accordingly.
Further, in the second step, rotation, scaling, translation, cropping and mirror transformations are used to enlarge the mammography data set.
Compared with the prior art, the present invention has the following beneficial effects:
The present invention introduces convolutional neural networks into the classification of mammography images, using the powerful information-processing capability of computers to identify and classify mammography images and assist doctors in judging whether a breast lesion is likely benign or malignant, which is of great significance in the field of medical imaging.
The present invention studies a method for identifying and classifying mammography images based on deep convolutional neural networks: the acquired mammography images are first preprocessed, data augmentation (Data Augmentation) is then applied to address the shortage of data, and finally the data set is used to train the neural network, so as to assist doctors in rapid diagnosis and provide a new and effective method for clinical diagnosis and treatment.
Description of the drawings
The present invention is further described in detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of the GoogLeNet Inception V4 model structure.
Fig. 3 is a schematic diagram of the residual structure of ResNet.
Specific embodiment
The present invention is further described with reference to Fig. 1, Fig. 2 and Fig. 3. A method for classifying mammography images using GoogLeNet Inception V4 comprises the following specific steps:
1. Preprocessing the mammography images
Mammography images acquired in hospitals generally contain a large background area. This background contains no useful information and does not help the training of the neural network. Removing the redundant background reduces the amount of computation and improves network performance, so this region is removed before the image is fed into the neural network. The specific method is as follows:
(1) Sum the pixel values of every row of the image to obtain one set of values;
(2) Set the threshold to 0, remove the rows whose sum in step (1) is 0, and keep the rows whose sum is greater than 0;
(3) Repeat the above steps for the pixel values of every column of the image;
The finally obtained image is the image with the redundant background removed.
2. Data augmentation of the mammography images
Training a neural network requires a relatively large data set. In practice, however, collecting mammography data in hospitals is time-consuming and laborious, and at this stage the lack of unified standards and various other problems make such data scarce. The present invention adopts data augmentation to solve this data-scarcity problem.
Data augmentation is a technique commonly used in deep learning. It refers to the process of generating new samples from existing data, which alleviates data scarcity and prevents over-fitting. In object-recognition tasks on natural images, usually only a simple horizontal flip is performed, but for tasks such as optical character recognition it has been shown that elastic deformation can greatly improve performance.
Common data-augmentation methods include:
(1) translation;
(2) horizontal and vertical flipping;
(3) scale change;
(4) random cropping and scaling;
(5) color and lighting changes;
(6) Gaussian noise, blurring, etc.
For mammography images, the main sources of variation at the lesion level are rotation, scaling, translation and the amount of occluding tissue. Data-augmentation transformations are therefore applied to each original training image to increase the effective size of the data set, thereby enlarging the training set and mitigating the influence of its relatively small scale. The specific steps are as follows:
(1) Rescale to 350 × 350 pixels
transforms.Resize(350) is used to resize the image; the random cropping of the next step is then performed on the 350-pixel image.
(2) Randomly crop a 299 × 299 pixel region
GoogLeNet requires a 299-pixel input; transforms.RandomCrop(299) is used to perform the random cropping, which also increases the number of images.
(3) Random horizontal flip
The transforms.RandomHorizontalFlip() function flips the randomly cropped 299-pixel image horizontally with probability 0.5, increasing the number of images.
(4) Add Gaussian noise
Gaussian noise is added using OpenCV.
(5) Normalization
The transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) function normalizes the image after the noise has been added.
Steps (1) to (5) are repeated several times to augment the mammography data.
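Steps (1) to (5) can be composed into a single augmentation pipeline. The following is a minimal sketch assuming PIL input images and the torchvision transforms named above; the AddGaussianNoise helper, its standard deviation and the placement of ToTensor() are illustrative assumptions (the original text adds the noise with OpenCV):

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Assumed tensor-based stand-in for the OpenCV Gaussian-noise step."""
    def __init__(self, std: float = 0.01):
        self.std = std

    def __call__(self, tensor: torch.Tensor) -> torch.Tensor:
        return tensor + torch.randn_like(tensor) * self.std

augment = transforms.Compose([
    transforms.Resize(350),                      # (1) resize (shorter edge -> 350)
    transforms.RandomCrop(299),                  # (2) random 299 x 299 crop
    transforms.RandomHorizontalFlip(),           # (3) horizontal flip with probability 0.5
    transforms.ToTensor(),                       # PIL image -> tensor, required before (4)/(5)
    AddGaussianNoise(std=0.01),                  # (4) Gaussian noise (illustrative sigma)
    transforms.Normalize([0.485, 0.456, 0.406],  # (5) normalization
                         [0.229, 0.224, 0.225]),
])
```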
3. Building and training the deep convolutional neural network model
The present invention uses GoogLeNet Inception V4 to classify mammography images, helping doctors quickly decide whether breast lesions are benign or malignant. GoogLeNet is a deep convolutional neural network developed on the basis of LeNet; its structure has 22 layers, and its main advance is the use of the Inception structure. The Inception structure makes efficient use of the computing resources inside the network, increasing both the width and the depth of the network without increasing the computational load. At the same time, to further improve network quality, GoogLeNet adopts the Hebbian principle and multi-scale processing, and it achieves good results in both classification and detection.
GoogLeNet is a network built by adding Inception structures, which is where its superiority lies. The original Inception node is itself a small network, i.e. a Network-in-Network structure. By adding the Inception structure, GoogLeNet forms a "basic neuron" from which a new network can be built; this network maintains the sparsity of the network structure while still exploiting the high computational efficiency of dense matrices. Inception V4 evolved from Inception V1, Inception V2 and Inception V3; its improvement is that the Inception modules of the Inception V3 structure are combined with residual networks (ResNet) through residual connections (Residual Connection). The ResNet structure greatly deepens the network, greatly increases training speed, and also improves performance.
Fig. 3 shows the residual structure of ResNet in GoogLeNet Inception V4, in which the 1 × 1 convolution is used for dimensionality reduction. Added after the Inception module, it increases both the width and the depth of the deep convolutional network, so that the performance of the whole network is improved by a factor of 2-3.
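As a rough illustration of the residual connection shown in Fig. 3, the following PyTorch sketch wraps a placeholder branch and a 1 × 1 convolution in a shortcut connection; the branch contents and channel sizes are illustrative assumptions, not the actual Inception module:

```python
import torch
import torch.nn as nn

class InceptionResidualBlock(nn.Module):
    """Sketch of an Inception-style branch combined with a residual (shortcut) connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.branch = nn.Sequential(                       # placeholder for an Inception module
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.reduce = nn.Conv2d(channels, channels, kernel_size=1)  # 1 x 1 convolution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.reduce(self.branch(x)))  # shortcut addition, then activation
```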
GoogLeNet Inception V4 has the following characteristics:
(1) In terms of depth, GoogLeNet Inception V4 has a 22-layer structure; to prevent vanishing gradients, GoogLeNet adds loss functions at different positions in the network.
(2) In terms of width, kernels of several sizes (1 × 1, 3 × 3, 5 × 5) and a pooling layer are used. If these were simply applied to the feature maps directly, however, the merged feature maps would become very thick. To avoid this, Inception V4 adds a 1 × 1 convolution kernel after the 3 × 3 kernel, the 5 × 5 kernel and the pooling layer respectively, which reduces the thickness of the feature maps.
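The description does not specify how the network is instantiated; the following is a minimal sketch assuming the timm library's inception_v4 implementation, a two-class output head (benign/malignant) and illustrative optimizer settings, none of which are stated in the original text:

```python
import timm
import torch.nn as nn
import torch.optim as optim

# Assumed setup: timm's Inception V4 with a two-class (benign / malignant) head.
model = timm.create_model('inception_v4', pretrained=True, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```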
The specific steps of the present invention for identifying and classifying mammography images are as follows:
(1) The collected mammography images are preprocessed to remove the background;
(2) Rotation, scaling, translation, cropping and mirror transformations are applied to each training image to increase the effective size of the data set, thereby enlarging the training set and mitigating the influence of its relatively small scale;
(3) The preprocessed and augmented mammography images are randomly divided into 10 parts, and ten-fold cross-validation is used for training and testing to obtain an accurate network model (see the sketch after this list);
(4) The trained model is used for classification.
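A minimal sketch of the ten-fold split in step (3), assuming images and labels are NumPy arrays and train_and_evaluate is a hypothetical helper that trains the model on one fold and returns its test accuracy:

```python
import numpy as np
from sklearn.model_selection import KFold

def ten_fold_cross_validation(images, labels, train_and_evaluate):
    """Split the data into 10 parts and average the per-fold test accuracy."""
    kfold = KFold(n_splits=10, shuffle=True, random_state=0)
    accuracies = []
    for train_idx, test_idx in kfold.split(images):
        acc = train_and_evaluate(images[train_idx], labels[train_idx],
                                 images[test_idx], labels[test_idx])
        accuracies.append(acc)
    return float(np.mean(accuracies))
```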
After the above four steps, the convolutional neural network model is fully exploited to obtain mammography classification results quickly and objectively, thereby helping doctors reach a diagnosis.
The present invention introduces convolutional neural networks into the classification of mammography images, using the powerful information-processing capability of computers to identify and classify mammography images and assist doctors in judging whether a breast lesion is likely benign or malignant, which is of great significance in the field of medical imaging.
The present invention studies a method for identifying and classifying mammography images based on deep convolutional neural networks: the acquired mammography images are first preprocessed, data augmentation (Data Augmentation) is then applied to address the shortage of data, and finally the data set is used to train the neural network, so as to assist doctors in rapid diagnosis and provide a new and effective auxiliary method for clinical diagnosis and treatment.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variants can be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the present invention is defined by the appended claims.

Claims (2)

1. A method for classifying mammography images, characterized by comprising the following steps:
Step 1: preprocess the mammography images
Remove the redundant background from the mammography images as follows:
a. Sum the pixel values of every row of the image to obtain one set of values;
b. Set the threshold to 0, remove the rows whose sum in step a is 0, and keep the rows whose sum is greater than 0;
c. Repeat the above steps for the pixel values of every column of the image;
The resulting image is the image with the redundant background removed;
Step 2: data augmentation, i.e. enlarging the mammography data set, as follows:
Rescale the background-removed mammography image to 350 × 350 pixels;
d. Randomly and repeatedly crop 299 × 299 pixel regions;
e. Randomly flip the image horizontally to increase the number of images;
f. Add Gaussian noise;
g. Normalize;
Repeat steps d to g several times to augment the mammography data;
Step 3: train a deep convolutional neural network
Train on the augmented mammography data using GoogLeNet Inception V4 to build a mammography big-data network model,
where GoogLeNet is a deep convolutional neural network developed on the basis of LeNet, the deep convolutional neural network adopts the Inception V4 structure,
and the Inception V4 structure uses residual connections (Residual Connection) to combine the Inception modules with residual networks (ResNet);
Step 4: classify the mammography images
Compare newly input mammography images against the big-data network model obtained by training the deep convolutional neural network, and classify them accordingly.
2. The method for classifying mammography images according to claim 1, characterized in that: in the second step, rotation, scaling, translation, cropping and mirror transformations are used to enlarge the mammography data set.
CN201910083214.2A 2019-01-29 2019-01-29 A method for classifying mammography images Pending CN109840906A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910083214.2A CN109840906A (en) 2019-01-29 2019-01-29 A method for classifying mammography images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910083214.2A CN109840906A (en) 2019-01-29 2019-01-29 A method for classifying mammography images

Publications (1)

Publication Number Publication Date
CN109840906A true CN109840906A (en) 2019-06-04

Family

ID=66884281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910083214.2A Pending CN109840906A (en) 2019-01-29 2019-01-29 A method for classifying mammography images

Country Status (1)

Country Link
CN (1) CN109840906A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021164640A1 (en) * 2020-02-19 2021-08-26 京东方科技集团股份有限公司 Retinal image recognition method and apparatus, electronic device, and storage medium
CN113627459A (en) * 2021-03-30 2021-11-09 太原理工大学 Rectal cancer pathological section image classification method and device based on Incepton network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052977A (en) * 2017-12-15 2018-05-18 福建师范大学 Breast molybdenum target picture depth study classification method based on lightweight neutral net
CN108596882A (en) * 2018-04-10 2018-09-28 中山大学肿瘤防治中心 The recognition methods of pathological picture and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052977A (en) * 2017-12-15 2018-05-18 福建师范大学 Breast molybdenum target picture depth study classification method based on lightweight neutral net
CN108596882A (en) * 2018-04-10 2018-09-28 中山大学肿瘤防治中心 The recognition methods of pathological picture and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PYTORCH: "PyTorch CN Documentation", HTTPS://PYTORCH-CN.READTHEDOCS.IO/ZH/STABLE/ *
TENSORSENSE: "PyTorch Study Notes (3): the twenty-two methods of transforms", HTTPS://BLOG.CSDN.NET/U011995719/ARTICLE/DETAILS/85107009 *
Chinese Association for Artificial Intelligence: "Progress of Artificial Intelligence in China 2003: Proceedings of the 10th National Academic Annual Conference of the Chinese Association for Artificial Intelligence", 30 November 2003, China Posts and Telecommunications University Press *
吴国平 (Wu Guoping): "Principles of Digital Image Processing", 30 September 2007, Wuhan: China University of Geosciences Press *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021164640A1 (en) * 2020-02-19 2021-08-26 京东方科技集团股份有限公司 Retinal image recognition method and apparatus, electronic device, and storage medium
US11967181B2 (en) 2020-02-19 2024-04-23 Boe Technology Group Co., Ltd. Method and device for retinal image recognition, electronic equipment, and storage medium
CN113627459A (en) * 2021-03-30 2021-11-09 太原理工大学 Rectal cancer pathological section image classification method and device based on Incepton network

Similar Documents

Publication Publication Date Title
CN108198184B (en) Method and system for vessel segmentation in contrast images
CN110490892A (en) A kind of Thyroid ultrasound image tubercle automatic positioning recognition methods based on USFaster R-CNN
CN108416360B (en) Cancer diagnosis system and method based on breast molybdenum target calcification features
CN110675411B (en) Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN110276356A (en) Eye fundus image aneurysms recognition methods based on R-CNN
CN107657602A (en) Based on the breast structure disorder recognition methods for migrating convolutional neural networks twice
CN110084803A (en) Eye fundus image method for evaluating quality based on human visual system
Li et al. Convolutional neural networks for intestinal hemorrhage detection in wireless capsule endoscopy images
CN110188792A (en) The characteristics of image acquisition methods of prostate MRI 3-D image
Saranyaraj et al. A deep convolutional neural network for the early detection of breast carcinoma with respect to hyper-parameter tuning
CN111104961A (en) Method for classifying breast cancer based on improved MobileNet network
CN113658201B (en) Deep learning colorectal cancer polyp segmentation device based on enhanced multi-scale features
CN106780453A (en) A kind of method realized based on depth trust network to brain tumor segmentation
Cao et al. Gastric cancer diagnosis with mask R-CNN
CN110390678B (en) Tissue type segmentation method of colorectal cancer IHC staining image
CN115471701A (en) Lung adenocarcinoma histology subtype classification method based on deep learning and transfer learning
CN109840906A (en) A method for classifying mammography images
CN114398979A (en) Ultrasonic image thyroid nodule classification method based on feature decoupling
Lyu et al. Fundus image based retinal vessel segmentation utilizing a fast and accurate fully convolutional network
Rasheed et al. Use of transfer learning and wavelet transform for breast cancer detection
Guo et al. Thyroid nodule ultrasonic imaging segmentation based on a deep learning model and data augmentation
CN116188352A (en) Pulmonary nodule segmentation method based on enhanced edge features
Valério et al. Lesions multiclass classification in endoscopic capsule frames
CN109948706B (en) Micro-calcification cluster detection method combining deep learning and feature multi-scale fusion
Sowmya et al. Vision transformer based ResNet model for pneumonia prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20190604