Summary of the invention
The main purpose of the present application is to provide a pectoral muscle region image processing method and apparatus for mammography X-ray images, so as to solve the problem of poor segmentation performance.
To achieve the above objective, according to one aspect of the present application, a pectoral muscle region image processing method for mammography X-ray images is provided.
The pectoral muscle region image processing method for mammography X-ray images according to the present application includes: inputting a mammography X-ray image to be processed; obtaining a first feature map of the mammography X-ray image to be processed through a first preset convolutional neural network; concatenating the first feature map with a second feature map obtained after global-context-information encoding and sampling processing; and performing feature fusion through a second preset convolutional network, with the resulting original-image-sized output serving as the predicted image result for the pectoral muscle region image.
Further, the global-context-information encoding includes: encoding the global context information by means of global average pooling.
Further, the sampling processing includes: upsampling processing.
Further, the first preset convolutional neural network is used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Further, the second preset convolutional network is used to fuse features by convolution and to obtain a feature map upsampled to the original image size.
To achieve the above objective, according to another aspect of the present application, a pectoral muscle region image processing apparatus for mammography X-ray images is provided.
The pectoral muscle region image processing apparatus for mammography X-ray images according to the present application includes: an input module for inputting a mammography X-ray image to be processed; a feature extraction module for obtaining a first feature map of the mammography X-ray image to be processed through a first preset convolutional neural network; a concatenation module for concatenating the first feature map with a second feature map obtained after global-context-information encoding and sampling processing; and a feature fusion module for performing feature fusion through a second preset convolutional network, with the resulting original-image-sized output serving as the predicted image result for the pectoral muscle region image.
Further, the concatenation module is also used to encode the global context information by means of global average pooling.
Further, the concatenation module is also used to perform upsampling processing.
Further, the feature extraction module is also used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Further, the feature fusion module is also used to fuse features by convolution and to obtain a feature map upsampled to the original image size.
In the embodiments of the present application, a mammography X-ray image to be processed is input, a first feature map of the image is obtained through a first preset convolutional neural network, and the first feature map is concatenated with a second feature map obtained after global-context-information encoding and sampling processing; feature fusion is then performed through a second preset convolutional network, and the resulting original-image-sized output serves as the predicted image result for the pectoral muscle region image, thereby solving the technical problem of poor segmentation performance.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first", "second", and the like in the description, claims, and accompanying drawings of the present application are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the application described herein can be implemented. In addition, the terms "comprise" and "have", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to such process, method, product, or device.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features therein may be combined with one another. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
As shown in Figure 1, the method includes the following steps S102 to S108:
Step S102: input a mammography X-ray image to be processed.
The mammography X-ray images include the mediolateral oblique view (MLO) and the craniocaudal view (CC), and pectoral muscle region segmentation is performed on the MLO view.
Step S104: obtain a first feature map of the mammography X-ray image to be processed through a first preset convolutional neural network.
The first preset convolutional neural network is the CNN appearing in the network architecture; that is, after the mammography X-ray image to be processed is input into the first preset convolutional neural network, a feature map is obtained as output.
Step S106: concatenate the first feature map with a second feature map obtained after global-context-information encoding and sampling processing; and
Because existing methods do not take the contextual information of the image into account when segmenting the pectoral muscle, their segmentation performance is poor on samples with low contrast between the pectoral muscle region and the breast region.
The second feature map is obtained by subjecting the first feature map to global-context-information encoding and sampling, and the first feature map and the second feature map are then concatenated.
Specifically, global-context-information encoding is applied to the original first feature map through a pooling layer, and the new feature map obtained after a preset sampling process serves as the second feature map; the original feature map (i.e., the first feature map) is then concatenated with the processed second feature map.
It should be noted that the global context information includes the location of the pectoral muscle within the whole image, its surrounding environment information, and the like.
It should be noted that the global-context-information encoding process can be learned by a supervised convolutional neural network, and those skilled in the art may perform the encoding in other ways, as long as the encoding requirement is met.
Step S108: perform feature fusion through a second preset convolutional network, with the resulting original-image-sized output serving as the predicted image result for the pectoral muscle region image.
After feature fusion is performed through the second preset convolutional network, an output of the original image size can be obtained as the predicted image result for the pectoral muscle region image; that is, a black-and-white binary image of the same size as the mammography X-ray image to be processed is obtained as the predicted image, in which the pectoral muscle region is white and the non-pectoral-muscle region is black.
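The encoding and concatenation of steps S104 to S106 can be sketched in terms of array shapes. The following is a minimal numpy illustration with a hypothetical feature map size; it is not the patented network itself:

```python
import numpy as np

# Hypothetical first feature map from the first preset CNN: (channels, height, width)
first_feature = np.random.rand(64, 32, 32)

# Global-context-information encoding via global average pooling: one value per channel
context = first_feature.mean(axis=(1, 2), keepdims=True)   # shape (64, 1, 1)

# Sampling (upsampling) the encoded context back to the feature-map size
second_feature = np.tile(context, (1, 32, 32))             # shape (64, 32, 32)

# Concatenate the first and second feature maps along the channel axis
concatenated = np.concatenate([first_feature, second_feature], axis=0)  # (128, 32, 32)
print(concatenated.shape)
```

The concatenated map would then be passed to the second preset convolutional network of step S108 for fusion.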
To address the above problems of semantic segmentation networks in pectoral muscle segmentation of mammography X-ray images, a network model that takes global context information into account is adopted and named the Global Context Network (Global Context Net, GCNet). Specifically, the backbone network of GCNet is ResNet50 pre-trained on the ImageNet database, the deep learning framework is PyTorch, and the hardware platform is an Nvidia Titan XP. GCNet is trained with stochastic gradient descent, with a batch size of 16, an initial learning rate of 0.005, a momentum of 0.9, and a weight decay of 0.0001. The model is trained for 30 epochs in total, and the model with the best validation result is saved for testing. Meanwhile, to alleviate model overfitting, the present application expands the training set by random rotation, random flipping, random scaling, and the like.
Preferably, the global-context-information encoding includes: encoding the global context information by means of global average pooling.
Preferably, the sampling processing includes: upsampling processing.
Preferably, the first preset convolutional neural network is used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Preferably, the second preset convolutional network is used to fuse features by convolution and to obtain a feature map upsampled to the original image size.
It can be seen from the above description that the present application achieves the following technical effects:
In the embodiments of the present application, a mammography X-ray image to be processed is input, a first feature map of the image is obtained through a first preset convolutional neural network, and the first feature map is concatenated with a second feature map obtained after global-context-information encoding and sampling processing; feature fusion is then performed through a second preset convolutional network, and the resulting original-image-sized output serves as the predicted image result for the pectoral muscle region image, thereby solving the technical problem of poor segmentation performance.
It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that herein.
According to an embodiment of the present application, a pectoral muscle region image processing apparatus for mammography X-ray images for implementing the above method is also provided. As shown in Figure 2, the apparatus includes: an input module 10 for inputting a mammography X-ray image to be processed; a feature extraction module 20 for obtaining a first feature map of the mammography X-ray image to be processed through a first preset convolutional neural network; a concatenation module 30 for concatenating the first feature map with a second feature map obtained after global-context-information encoding and sampling processing; and a feature fusion module 40 for performing feature fusion through a second preset convolutional network, with the resulting original-image-sized output serving as the predicted image result for the pectoral muscle region image.
The mammography X-ray images described in the input module 10 of this embodiment include the mediolateral oblique view (MLO) and the craniocaudal view (CC), and pectoral muscle region segmentation is performed on the MLO view.
The first preset convolutional neural network described in the feature extraction module 20 of this embodiment is the CNN appearing in the network architecture; that is, after the mammography X-ray image to be processed is input into the first preset convolutional neural network, a feature map is obtained as output.
In the concatenation module 30 of this embodiment, because existing methods do not take the contextual information of the image into account when segmenting the pectoral muscle, segmentation performance is poor on samples with low contrast between the pectoral muscle region and the breast region.
The second feature map is obtained by subjecting the first feature map to global-context-information encoding and sampling, and the first feature map and the second feature map are then concatenated.
Specifically, global-context-information encoding is applied to the original first feature map through a pooling layer, and the new feature map obtained after a preset sampling process serves as the second feature map; the original feature map (i.e., the first feature map) is then concatenated with the processed second feature map.
It should be noted that the global context information includes the location of the pectoral muscle within the whole image, its surrounding environment information, and the like.
It should be noted that the global-context-information encoding process can be learned by a supervised convolutional neural network, and those skilled in the art may perform the encoding in other ways, as long as the encoding requirement is met.
In the feature fusion module 40 of this embodiment, after feature fusion is performed through the second preset convolutional network, an output of the original image size can be obtained as the predicted image result for the pectoral muscle region image; that is, a black-and-white binary image of the same size as the mammography X-ray image to be processed is obtained as the predicted image, in which the pectoral muscle region is white and the non-pectoral-muscle region is black.
To address the above problems of semantic segmentation networks in pectoral muscle segmentation of mammography X-ray images, a network model that takes global context information into account is adopted and named the Global Context Network (Global Context Net, GCNet). Specifically, the backbone network of GCNet is ResNet50 pre-trained on the ImageNet database, the deep learning framework is PyTorch, and the hardware platform is an Nvidia Titan XP. GCNet is trained with stochastic gradient descent, with a batch size of 16, an initial learning rate of 0.005, a momentum of 0.9, and a weight decay of 0.0001. The model is trained for 30 epochs in total, and the model with the best validation result is saved for testing. Meanwhile, to alleviate model overfitting, the present application expands the training set by random rotation, random flipping, random scaling, and the like.
Preferably, the concatenation module is also used to encode the global context information by means of global average pooling.
Preferably, the concatenation module is also used to perform upsampling processing.
Preferably, the feature extraction module is also used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Preferably, the feature fusion module is also used to fuse features by convolution and to obtain a feature map upsampled to the original image size.
Preferably, the first preset convolutional neural network or the second preset convolutional neural network may adopt any one or more of the semantic segmentation sub-networks of FCN, SegNet, or U-Net.
The implementation principle of the present application is as follows:
Existing deep learning semantic segmentation models, such as FCN, SegNet, and U-Net, can perform fully automatic pectoral muscle segmentation on mammography X-ray images. However, these models classify image pixels directly; because they do not take the contextual information of the image into account, their segmentation performance is poor on samples with low contrast between the pectoral muscle region and the breast region. As a result, pectoral muscle regions and parts of the breast region with no significant intensity difference are lumped together by U-Net, and effective segmentation cannot be achieved.
In view of the above problems, the implementation method in the embodiments of the present application is as follows:
First, a mammography X-ray image to be processed is input; then a first feature map of the mammography X-ray image to be processed is obtained through a first preset convolutional neural network; the first feature map is then concatenated with a second feature map obtained after global-context-information encoding and sampling processing; finally, feature fusion is performed through a second preset convolutional network, and the resulting original-image-sized output serves as the predicted image result for the pectoral muscle region image.
To address the above problems of semantic segmentation networks in pectoral muscle segmentation of mammography X-ray images, a new network that takes global context information into account is designed and named the Global Context Network (Global Context Net, GCNet); its structure is shown in Figure 3. Given an input image (a), a convolutional neural network (CNN) is first used to obtain the output feature map (b) of the last convolutional layer; the global context information is then encoded by the simple and efficient global average pooling, upsampled, and concatenated with feature map (b); finally, the features are fused by convolution and upsampled to the original image size to obtain the predicted image.
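The forward pass just described (last-layer feature map, global average pooling, upsampling and concatenation, convolutional fusion, upsampling to the original size) can be sketched end to end as follows. This is an illustrative numpy sketch with hypothetical shapes and random weights, not the trained GCNet:

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 256, 256                 # hypothetical original image size
f = rng.random((64, 32, 32))    # (b): last-conv-layer feature map from the backbone CNN

# Encode global context by global average pooling, then upsample and concatenate with (b)
ctx = f.mean(axis=(1, 2), keepdims=True)                          # (64, 1, 1)
cat = np.concatenate([f, np.broadcast_to(ctx, f.shape)], axis=0)  # (128, 32, 32)

# Fuse features with a 1x1 convolution (here a random channel-mixing matrix) to one score map
w = rng.random((1, 128))
score = np.tensordot(w, cat, axes=([1], [0]))[0]                  # (32, 32)

# Upsample to the original image size (nearest neighbour) and threshold to a binary mask:
# pectoral muscle region white (255), the rest black (0)
pred = np.kron(score, np.ones((H // 32, W // 32)))
mask = np.where(pred > pred.mean(), 255, 0).astype(np.uint8)
print(mask.shape)
```

In the actual network the fusion and upsampling would be learned convolutional and interpolation layers rather than random weights and nearest-neighbour replication.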
Specifically, the backbone network of GCNet is ResNet50 pre-trained on the ImageNet database, the deep learning framework is PyTorch, and the hardware platform is an Nvidia Titan XP. The network is trained with stochastic gradient descent, with a batch size of 16, an initial learning rate of 0.005, a momentum of 0.9, and a weight decay of 0.0001. The model is trained for 30 epochs in total, and the model with the best validation result is saved for testing. Meanwhile, to alleviate model overfitting, the present application expands the training set by random rotation, random flipping, random scaling, and the like.
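A single parameter update under the training scheme above (SGD with learning rate 0.005, momentum 0.9, weight decay 0.0001) can be written out explicitly. In practice this is what PyTorch's stochastic gradient descent optimizer computes; here it is shown as a plain numpy sketch with a hypothetical parameter and gradient:

```python
import numpy as np

lr, momentum, weight_decay = 0.005, 0.9, 0.0001  # values from the training setup

param = np.array([1.0, -2.0])     # hypothetical network parameter
grad = np.array([0.5, 0.5])       # hypothetical gradient from one mini-batch
velocity = np.zeros_like(param)   # momentum buffer

# One SGD step: weight decay adds weight_decay*param to the gradient,
# the momentum buffer accumulates it, and the parameter moves against it
grad_wd = grad + weight_decay * param
velocity = momentum * velocity + grad_wd
param = param - lr * velocity
print(param)
```

Over subsequent steps the momentum buffer smooths the update direction, which is why the momentum value of 0.9 matters for convergence.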
Experimental analysis
The experimental data in the present application comprise 1009 MLO-view images in total, which are divided into training, validation, and test sets in an 8:1:1 ratio, with all images of the same patient assigned to the same set.
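A patient-level split of this kind can be sketched as follows; the image and patient identifiers and the images-per-patient count are hypothetical, the point being that splitting is done over patients so that no patient's images straddle two subsets:

```python
import random

# Hypothetical data: map each image to its patient id (here, 3 images per patient)
images = {f"img_{i:04d}": f"patient_{i // 3}" for i in range(1008)}

patients = sorted(set(images.values()))
random.Random(0).shuffle(patients)

# Split the *patients* 8:1:1, then collect each patient's images
n = len(patients)
train_p = set(patients[: int(0.8 * n)])
val_p = set(patients[int(0.8 * n): int(0.9 * n)])
test_p = set(patients[int(0.9 * n):])

train = [img for img, p in images.items() if p in train_p]
val = [img for img, p in images.items() if p in val_p]
test = [img for img, p in images.items() if p in test_p]
print(len(train), len(val), len(test))
```

Because membership is decided per patient, every image of a given patient lands in exactly one of the three sets, preventing leakage between training and evaluation.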
To assess and compare model performance, the present application uses the Dice index, Dice Index = 2*TP / (2*TP + FP + FN), as the evaluation metric for pectoral muscle segmentation. Here, TP is the number of correctly classified pectoral muscle pixels, FP is the number of non-pectoral-muscle pixels misclassified as pectoral muscle pixels, and FN is the number of pectoral muscle pixels misclassified as non-pectoral-muscle pixels. The experimental results are shown in Table 1.
Table 1. Pectoral muscle segmentation test results of U-Net and GCNet on mammography X-ray images

Model                          Dice Index (%)
U-Net                          70.27
GCNet (without pre-training)   90.22
GCNet (with pre-training)      95.93
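The Dice index defined above can be computed directly from two binary masks. The following is a small self-contained sketch with toy masks, not the experimental code:

```python
import numpy as np

def dice_index(pred, truth):
    """Dice Index = 2*TP / (2*TP + FP + FN) for boolean pectoral-muscle masks."""
    tp = np.sum(pred & truth)     # pectoral muscle pixels classified correctly
    fp = np.sum(pred & ~truth)    # non-pectoral pixels misclassified as pectoral
    fn = np.sum(~pred & truth)    # pectoral pixels misclassified as non-pectoral
    return 2 * tp / (2 * tp + fp + fn)

# Toy 2x2 example: two pixels agree on "pectoral", one is a false positive
pred = np.array([[True, True], [True, False]])
truth = np.array([[True, True], [False, False]])
print(dice_index(pred, truth))  # 2*2 / (2*2 + 1 + 0) = 0.8
```

A perfect prediction gives a Dice index of 1.0, so the 95.93% in Table 1 corresponds to near-complete overlap between predicted and ground-truth pectoral muscle regions.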
Comparing U-Net with the GCNet pre-trained on the ImageNet database (both kept at comparable complexity), it can be seen that the GCNet proposed in the present application, which takes global context information into account, greatly improves pectoral muscle segmentation; the pectoral muscle segmentation Dice index of the pre-trained GCNet reaches 95.93%. Clearly, for images with obvious contrast between the pectoral muscle region and the breast region, the model of the present application has good pectoral muscle segmentation performance, and can accurately predict even pectoral-muscle/breast boundaries with large curvature; for images with less obvious contrast between the pectoral muscle region and the breast region, the model of the present application is still able to make accurate predictions for the pectoral muscle region by incorporating global context information. This demonstrates the importance of global context information for pectoral muscle segmentation, and also demonstrates that the GCNet proposed in the present application is capable of automatic pectoral muscle segmentation of mammography X-ray images.
Obviously, those skilled in the art should understand that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, or they may be fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application; various changes and modifications may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.