A method for improving the stability of deep-learning-based B-ultrasound image recognition
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a method for improving the stability of deep-learning-based B-ultrasound image recognition.
Background technique
Deep learning has made great progress in traditional image recognition, but when applied to B-ultrasound image recognition it encounters certain difficulties, one of which is the instability of the predicted result. When an ROI (region of interest) cropped from the same target image is identified with a deep convolutional neural network, slight differences in the manually selected ROI position can lead to large changes in the result. Experiments show that with a probability of about 20% the result jumps between different classes. This problem leaves users at a loss, and thus limits the application of deep learning to B-ultrasound image recognition.
Summary of the invention
In view of the above problems, the present invention proposes a method for improving the stability of deep-learning-based B-ultrasound image recognition.
The technical solution of the present invention is a method for improving the stability of deep-learning-based B-ultrasound image recognition, comprising the following steps:
A. Prepare training data: generate image semantic segmentation labels by combining the contours of targets in B-ultrasound images with their class labels;
B. Train: use a semantic segmentation model to learn the mapping from B-ultrasound images to semantic segmentation labels, obtaining a trained semantic segmentation deep learning model;
C. Identify: feed the image to be identified into the trained semantic segmentation deep learning model to obtain the segmentation regions of the different classes, i.e. the segmentation result for each region.
Optionally, step A specifically includes the following procedure:
A1. Classify: obtain the class information of the object to be identified in each B-ultrasound image;
A2. Obtain contours: obtain the contour of the object to be identified in each image;
A3. Generate labels: generate the training target labels from the class information and the contours; a label is an n-channel binarized image. A label may omit the background channel: in that case n equals the number of classes, the channel corresponding to the object's class contains its contour region, and the other channels are 0. A label may also include a background channel: in that case n equals the number of classes plus 1, the channel of the corresponding class contains the object's contour region, the background channel is the complement of the contour region, and the other channels are 0.
Optionally, step B specifically includes the following procedure:
B1. Feed B-ultrasound images into the semantic segmentation deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss error;
B2. Adjust the weights of the semantic segmentation deep learning model using the back-propagation algorithm and the stochastic gradient descent algorithm, so as to reduce the loss error;
B3. Repeat steps B1 and B2 until the result is stable.
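The B1–B3 loop can be sketched end to end. The toy below is an assumed illustration only: a single per-pixel linear classifier stands in for the full segmentation network, but the steps match — compute the multi-channel softmax output (B1), compare it with the label via a cross-entropy loss, and reduce the loss by back-propagated gradient descent (B2), repeated until stable (B3):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy data: a 1-channel "B-ultrasound image" and a 2-channel label.
H = W = 8
image = rng.random((1, H, W))
label = np.zeros((2, H, W))
label[0, 2:6, 2:6] = 1          # object channel: filled contour region
label[1] = 1 - label[0]         # background channel: its complement

# Weights of a per-pixel linear classifier (stand-in for the network).
Wt = rng.normal(0.0, 0.1, (2, 1))
b = np.zeros((2, 1, 1))

def forward(img):
    logits = np.tensordot(Wt, img, axes=([1], [0])) + b   # (2, H, W)
    return softmax(logits, axis=0)

lr = 1.0
losses = []
for step in range(50):                                  # B3: repeat B1-B2
    prob = forward(image)                               # B1: segmentation
    loss = -(label * np.log(prob + 1e-9)).mean()        # compare with label
    # B2: gradient of cross-entropy w.r.t. logits, then descent step
    grad_logits = (prob - label) / (H * W)
    gW = np.tensordot(grad_logits, image, axes=([1, 2], [1, 2]))
    gb = grad_logits.sum(axis=(1, 2), keepdims=True)
    Wt -= lr * gW
    b -= lr * gb
    losses.append(loss)
```

In practice any semantic segmentation network and deep learning framework can take the place of this linear stand-in; only the loop structure is the point here.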
Optionally, step C specifically includes the following procedure:
C1. Feed the B-ultrasound image to be identified into the trained semantic segmentation deep learning model to obtain the segmentation result for each class;
C2. Compute the class of the object to be identified in the B-ultrasound image from the per-class segmentation results.
Optionally, in step C2 the class of the object to be identified in the B-ultrasound image is computed from the per-class segmentation results by any one of the following three methods:
First, compare the sums of the pixel probabilities of each channel;
Second, compare the areas of each channel;
Third, use a threshold on a channel's area proportion as the decision.
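The three decision rules of step C2 can be sketched as follows (a NumPy illustration; the function and parameter names, and the 0.5 binarization threshold used for the area rules, are assumptions for the example):

```python
import numpy as np

def classify_from_segmentation(prob, method="prob_sum",
                               channel=None, ratio_threshold=0.1):
    """Derive the image-level class from a multi-channel segmentation
    output `prob` of shape (n_channels, H, W), per step C2.

    "prob_sum":   class with the largest sum of pixel probabilities;
    "area":       class with the largest binarized area (prob > 0.5);
    "area_ratio": True if `channel`'s area proportion of the whole
                  image exceeds `ratio_threshold`, else False.
    """
    if method == "prob_sum":
        return int(np.argmax(prob.sum(axis=(1, 2))))
    if method == "area":
        return int(np.argmax((prob > 0.5).sum(axis=(1, 2))))
    if method == "area_ratio":
        return bool((prob[channel] > 0.5).mean() > ratio_threshold)
    raise ValueError(f"unknown method: {method}")
```

Because each rule aggregates over whole regions of the segmentation output rather than relying on a single classification score, small shifts of the ROI perturb the decision far less.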
The beneficial effects of the present invention are: the present invention generates image semantic segmentation labels from conventional classification labels and target contours, and classifies targets via a deep semantic segmentation network, which greatly improves the stability of the classification results and increases the practical value of deep learning in B-ultrasound image recognition.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is an example of generating a semantic segmentation label.
Fig. 3 is a structural diagram of the semantic segmentation network.
Fig. 4 is an illustration of segmenting a B-ultrasound tumour image using the semantic segmentation network.
Specific embodiment
The embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
As shown in Fig. 1, a method for improving the stability of deep-learning-based B-ultrasound image recognition comprises the following steps:
A. Prepare training data: generate image semantic segmentation labels by combining the contours of targets in B-ultrasound images with their class labels;
B. Train: use a semantic segmentation model to learn the mapping from B-ultrasound images to semantic segmentation labels, obtaining a trained semantic segmentation deep learning model;
C. Identify: feed the image to be identified into the trained semantic segmentation deep learning model to obtain the segmentation regions of the different classes, i.e. the segmentation result for each region.
Optionally, step A specifically includes the following procedure:
A1. Classify: obtain the class information of the object to be identified in each B-ultrasound image;
A2. Obtain contours: obtain the contour of the object to be identified in each image;
A3. Generate labels: generate the training target labels from the class information and the contours; a label is an n-channel binarized image. A label may omit the background channel: in that case n equals the number of classes, the channel corresponding to the object's class contains its contour region, and the other channels are 0. A label may also include a background channel: in that case n equals the number of classes plus 1, the channel of the corresponding class contains the object's contour region, the background channel is the complement of the contour region, and the other channels are 0.
Optionally, step B specifically includes the following procedure:
B1. Feed B-ultrasound images into the semantic segmentation deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss error;
B2. Adjust the weights of the semantic segmentation deep learning model using the back-propagation algorithm and the stochastic gradient descent algorithm, so as to reduce the loss error;
B3. Repeat steps B1 and B2 until the result is stable.
Optionally, step C specifically includes the following procedure:
C1. Feed the B-ultrasound image to be identified into the trained semantic segmentation deep learning model to obtain the segmentation result for each class;
C2. Compute the class of the object to be identified in the B-ultrasound image from the per-class segmentation results.
Optionally, in step C2 the class of the object to be identified in the B-ultrasound image is computed from the per-class segmentation results by any one of the following three methods:
First, compare the sums of the pixel probabilities of each channel;
Second, compare the areas of each channel;
Third, use a threshold on a channel's area proportion as the decision.
Specifically, the method is now illustrated with the identification of B-ultrasound breast tumour images:
1. On the training set, annotate the benign/malignant class of each tumour together with the tumour ROI and its contour, and generate the semantic segmentation labels. Fig. 2 shows an example of generating a semantic segmentation label: on the left are the tumour ROI, its contour and its benign/malignant class; on the right is the generated 3-channel semantic segmentation label.
2. Train the semantic segmentation network with the tumour ROI image as input and the semantic segmentation label as the ground truth; its structure is shown in Fig. 3.
3. For a test image, after the user sketches the tumour ROI, feed it to the semantic segmentation network to obtain the segmentation result, from which the benign/malignant class of the tumour is finally computed. The benign/malignant decision can be made by comparing the sums of the pixel probabilities of each channel, by comparing the areas of each channel, or by declaring the tumour malignant when the area of the malignant channel exceeds a certain proportion. Fig. 4 illustrates segmenting a B-ultrasound tumour image with the semantic segmentation network.
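The inference step of this embodiment can be sketched as a short pipeline (an assumed illustration only: `segment` is a dummy stand-in for the trained network, and the channel ordering, the 0.5 binarization and the 0.2 malignant-area threshold are illustrative choices, not values fixed by the invention):

```python
import numpy as np

def segment(roi):
    """Dummy stand-in for the trained 3-channel segmentation network
    (channel 0: benign, 1: malignant, 2: background)."""
    h, w = roi.shape
    prob = np.zeros((3, h, w))
    prob[1, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.8  # fake malignant hit
    prob[2] = 1.0 - prob[:2].sum(axis=0)                 # background: rest
    return prob

def diagnose(roi, malignant_ratio=0.2):
    """Malignant if the malignant channel's area proportion exceeds the
    threshold (the third decision rule of C2); benign otherwise."""
    prob = segment(roi)
    area_ratio = (prob[1] > 0.5).mean()
    return "malignant" if area_ratio > malignant_ratio else "benign"

roi = np.zeros((64, 64))   # stands in for the user-sketched tumour ROI
result = diagnose(roi)     # "malignant": the fake region covers 25%
```

Swapping `segment` for a real trained model leaves the decision logic unchanged, which is the point of the method: the user-drawn ROI only needs to contain the tumour, not match a precise position.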
It should be noted that although breast ultrasound tumour segmentation and identification is used above as an example, this method is not limited to it: it can be applied to the image classification of any target in any human organ imaged with B-ultrasound, and any semantic segmentation network model can be used as the semantic segmentation network.
The above embodiments only express specific implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent of the present invention. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention.