CN109472313A - A method to improve the stability of deep learning to identify B-ultrasound images - Google Patents


Info

Publication number
CN109472313A
CN109472313A (application CN201811354227.0A)
Authority
CN
China
Prior art keywords
image
deep learning
classification
ultrasound
semantic segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811354227.0A
Other languages
Chinese (zh)
Inventor
林江莉
韩霖
陈科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201811354227.0A priority Critical patent/CN109472313A/en
Publication of CN109472313A publication Critical patent/CN109472313A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract


The invention discloses a method for improving the stability of deep learning recognition of B-ultrasound images, comprising the following steps: A. Prepare training data: generate image semantic segmentation labels by combining target B-ultrasound image contours with their category labels. B. Training: use a semantic segmentation model to train the mapping from B-ultrasound images to semantic segmentation labels. C. Recognition: feed the image to be recognized into the model to obtain segmentation regions of the different categories, and obtain the segmentation result for each region. In the present invention, conventional classification labels and target contours are used to generate image semantic segmentation labels, and a deep discriminative network classifies the target via segmentation. This greatly improves the stability of the classification results and increases the practical value of deep learning for B-ultrasound image recognition.

Description

A method for improving the stability of deep learning to identify B-ultrasound images
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a method for improving the stability of deep learning to identify B-ultrasound images.
Background art
Deep learning has made great progress in traditional image recognition, but when applied to B-ultrasound image recognition it encounters some difficulties, one of which is unstable prediction results. When an ROI (region of interest) cropped from the same target image is recognized with a deep convolutional neural network, small differences in the manually selected ROI position can lead to large changes in the result. Experiments show that with a probability of about 20%, the result jumps between different categories. This problem confuses users and thus limits the application of deep learning to B-ultrasound image recognition.
Summary of the invention
In view of the above problems, the present invention proposes a method for improving the stability of deep learning to identify B-ultrasound images.
The technical scheme of the present invention is a method for improving the stability of deep learning to identify B-ultrasound images, comprising the following steps:
A. Prepare training data: generate image semantic segmentation labels by combining target B-ultrasound image contours with their category labels.
B. Training: use a semantic segmentation model to train the mapping from B-ultrasound images to semantic segmentation labels, obtaining a trained semantic segmentation network deep learning model.
C. Recognition: feed the image to be recognized into the trained semantic segmentation network deep learning model to obtain segmentation regions of the different categories, and obtain the segmentation result for each region.
Optionally, step A specifically comprises the following procedure:
A1. Classify: obtain the classification category of the object to be recognized in each B-ultrasound image.
A2. Obtain contours: obtain the contour of the object to be recognized in each image.
A3. Generate labels: generate the training target labels from the category information and the contour; a label is an n-channel binarized image. The label may omit the background channel: n then equals the number of categories, the channel of the object's category holds the object contour, and the images of the other channels are 0. The label may also include a background channel: n then equals the number of categories plus 1, the channel of the object's category holds the object contour, the background channel holds the complement of the contour, and the images of the other channels are 0.
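The label construction of step A3 can be sketched as follows. This is an illustrative NumPy sketch, not code from the patent; the function name, argument names, and channel-first array layout are assumptions.

```python
import numpy as np

def make_segmentation_label(contour_mask, class_index, n_classes, with_background=True):
    """Build an n-channel binarized label for one annotated object.

    contour_mask    : (H, W) bool array, True inside the object contour.
    class_index     : integer in [0, n_classes), the object's category.
    with_background : if True, append an extra channel holding the
                      complement of the contour (the background), so the
                      label has n_classes + 1 channels; otherwise n_classes.
    """
    h, w = contour_mask.shape
    n_channels = n_classes + 1 if with_background else n_classes
    label = np.zeros((n_channels, h, w), dtype=np.uint8)
    label[class_index] = contour_mask.astype(np.uint8)   # object's own channel
    if with_background:
        label[-1] = (~contour_mask).astype(np.uint8)     # background = complement
    return label                                         # all other channels stay 0
```

For a benign/malignant task (n = 2) with a background channel this yields the 3-channel label described in the embodiment.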
Optionally, step B specifically comprises the following procedure:
B1. Feed a B-ultrasound image into the semantic segmentation network deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss.
B2. Adjust the weights of the semantic segmentation network deep learning model using the backpropagation algorithm and the stochastic gradient descent algorithm to reduce the loss.
B3. Repeat steps B1-B2 until the result stabilizes.
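The B1-B3 loop can be illustrated with a toy stand-in for the segmentation network: a per-pixel linear layer with sigmoid outputs trained by gradient descent on a binary cross-entropy loss. This is purely illustrative (the patent does not fix a network or loss); all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the segmentation network: each pixel has a 5-d feature
# vector, and a linear layer + sigmoid predicts 2 channel probabilities.
n_pixels, n_feat, n_chan = 64, 5, 2
X = rng.normal(size=(n_pixels, n_feat))        # per-pixel features of one image
W_true = rng.normal(size=(n_feat, n_chan))
labels = (X @ W_true > 0).astype(float)        # synthetic multi-channel label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.zeros((n_feat, n_chan))                 # model weights to be learned
lr = 0.5
losses = []
for step in range(200):                        # B3: repeat B1-B2 until stable
    probs = sigmoid(X @ W)                     # B1: multi-channel segmentation result
    eps = 1e-9                                 # B1: cross-entropy loss vs. the label
    loss = -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1 - probs + eps))
    losses.append(loss)
    grad = X.T @ (probs - labels) / n_pixels   # B2: backpropagated gradient
    W -= lr * grad                             # B2: gradient descent weight update
```

In practice the linear layer would be replaced by the full semantic segmentation network, but the B1-B2-B3 structure of the loop is the same.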
Optionally, step C specifically comprises the following procedure:
C1. Feed the B-ultrasound image to be recognized into the trained semantic segmentation network deep learning model to obtain the segmentation result for each category.
C2. Compute the category of the object to be recognized in the B-ultrasound image from the per-category segmentation results.
Optionally, in step C2 the category of the object to be recognized is computed from the per-category segmentation results using any one of the following three methods:
First, compare the sum of the pixel probabilities in each channel;
Second, compare the segmented area of each channel;
Third, use a threshold on the area proportion of a given channel as the decision.
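The three decision rules of step C2 can be sketched as one NumPy function. The function name, the 0.5 binarization cutoff, and the definition of "area proportion" (channel area over total foreground area) are assumptions for illustration; the patent leaves these details open.

```python
import numpy as np

def classify_from_segmentation(probs, mode="prob_sum", channel=None, threshold=0.5):
    """Derive a class from multi-channel segmentation probabilities.

    probs : (n_classes, H, W) array of per-pixel channel probabilities.
    mode  : 'prob_sum'  - pick the channel with the largest summed probability;
            'area'      - pick the channel with the largest binarized area;
            'threshold' - return True if `channel`'s share of the total
                          foreground area reaches `threshold`.
    """
    if mode == "prob_sum":                       # rule 1: sum of pixel probabilities
        return int(np.argmax(probs.sum(axis=(1, 2))))
    if mode == "area":                           # rule 2: compare channel areas
        areas = (probs > 0.5).sum(axis=(1, 2))
        return int(np.argmax(areas))
    if mode == "threshold":                      # rule 3: area-proportion threshold
        channel_area = (probs[channel] > 0.5).sum()
        total_area = (probs > 0.5).sum()
        return bool(total_area and channel_area / total_area >= threshold)
    raise ValueError(f"unknown mode: {mode}")
```

In the breast-tumor embodiment, rule 3 corresponds to declaring the tumor malignant when the malignant channel's area proportion exceeds the chosen threshold.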
The beneficial effects of the present invention are: conventional classification labels and target contours are used to generate image semantic segmentation labels, and a deep discriminative network classifies the target via segmentation. This greatly improves the stability of the classification results and increases the practical value of deep learning for B-ultrasound image recognition.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is an example of generating a semantic segmentation label.
Fig. 3 is a structural diagram of the semantic segmentation network.
Fig. 4 is an illustration of segmenting a B-ultrasound tumor image using the semantic segmentation network.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
As shown in Fig. 1, a method for improving the stability of deep learning to identify B-ultrasound images comprises the following steps:
A. Prepare training data: generate image semantic segmentation labels by combining target B-ultrasound image contours with their category labels.
B. Training: use a semantic segmentation model to train the mapping from B-ultrasound images to semantic segmentation labels, obtaining a trained semantic segmentation network deep learning model.
C. Recognition: feed the image to be recognized into the trained semantic segmentation network deep learning model to obtain segmentation regions of the different categories, and obtain the segmentation result for each region.
Optionally, step A specifically comprises the following procedure:
A1. Classify: obtain the classification category of the object to be recognized in each B-ultrasound image.
A2. Obtain contours: obtain the contour of the object to be recognized in each image.
A3. Generate labels: generate the training target labels from the category information and the contour; a label is an n-channel binarized image. The label may omit the background channel: n then equals the number of categories, the channel of the object's category holds the object contour, and the images of the other channels are 0. The label may also include a background channel: n then equals the number of categories plus 1, the channel of the object's category holds the object contour, the background channel holds the complement of the contour, and the images of the other channels are 0.
Optionally, step B specifically comprises the following procedure:
B1. Feed a B-ultrasound image into the semantic segmentation network deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss.
B2. Adjust the weights of the semantic segmentation network deep learning model using the backpropagation algorithm and the stochastic gradient descent algorithm to reduce the loss.
B3. Repeat steps B1-B2 until the result stabilizes.
Optionally, step C specifically comprises the following procedure:
C1. Feed the B-ultrasound image to be recognized into the trained semantic segmentation network deep learning model to obtain the segmentation result for each category.
C2. Compute the category of the object to be recognized in the B-ultrasound image from the per-category segmentation results.
Optionally, in step C2 the category of the object to be recognized is computed from the per-category segmentation results using any one of the following three methods:
First, compare the sum of the pixel probabilities in each channel;
Second, compare the segmented area of each channel;
Third, use a threshold on the area proportion of a given channel as the decision.
Specifically, the method is now illustrated with the recognition of B-ultrasound breast tumor images:
1. On the training set, annotate the benign/malignant category of each tumor together with the tumor's ROI and contour, and generate the semantic segmentation labels. Fig. 2 shows an example of generating a semantic segmentation label: on the left are the tumor's ROI, contour, and benign/malignant category; on the right is the generated 3-channel semantic segmentation label.
2. Train the semantic segmentation network with the tumor ROI images as input and the semantic segmentation labels as ground truth; the network structure is shown in Fig. 3.
3. For a test image, after the user sketches the tumor ROI, feed it into the semantic segmentation network to obtain the segmentation result, and finally compute the tumor's benign/malignant category. The category can be computed by comparing the sum of the pixel probabilities in each channel, by comparing the segmented area of each channel, or by declaring the tumor malignant when the malignant channel's area exceeds a certain proportion. Fig. 4 illustrates segmenting a B-ultrasound tumor image using the semantic segmentation network.
It should be noted that although breast ultrasound tumor segmentation and recognition is used above as an example, the method is not limited to it. It can be applied to category classification of any target in images of any human organ acquired with B-ultrasound, and any semantic segmentation network model can be used as the semantic segmentation network.
The above embodiments only express specific implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all belong to the protection scope of the present invention.

Claims (5)

1. A method for improving the stability of deep learning to identify B-ultrasound images, characterized by comprising the following steps:
A. Prepare training data: generate image semantic segmentation labels by combining target B-ultrasound image contours with their category labels.
B. Training: use a semantic segmentation model to train the mapping from B-ultrasound images to semantic segmentation labels, obtaining a trained semantic segmentation network deep learning model.
C. Recognition: feed the image to be recognized into the trained semantic segmentation network deep learning model to obtain segmentation regions of the different categories, and obtain the segmentation result for each region.
2. The method for improving the stability of deep learning to identify B-ultrasound images according to claim 1, characterized in that step A specifically comprises the following procedure:
A1. Classify: obtain the classification category of the object to be recognized in each B-ultrasound image.
A2. Obtain contours: obtain the contour of the object to be recognized in each image.
A3. Generate labels: generate the training target labels from the category information and the contour; a label is an n-channel binarized image. The label may omit the background channel: n then equals the number of categories, the channel of the object's category holds the object contour, and the images of the other channels are 0. The label may also include a background channel: n then equals the number of categories plus 1, the channel of the object's category holds the object contour, the background channel holds the complement of the contour, and the images of the other channels are 0.
3. The method for improving the stability of deep learning to identify B-ultrasound images according to claim 1, characterized in that step B specifically comprises the following procedure:
B1. Feed a B-ultrasound image into the semantic segmentation network deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss.
B2. Adjust the weights of the semantic segmentation network deep learning model using the backpropagation algorithm and the stochastic gradient descent algorithm to reduce the loss.
B3. Repeat steps B1-B2 until the result stabilizes.
4. The method for improving the stability of deep learning to identify B-ultrasound images according to claim 3, characterized in that step C specifically comprises the following procedure:
C1. Feed the B-ultrasound image to be recognized into the trained semantic segmentation network deep learning model to obtain the segmentation result for each category.
C2. Compute the category of the object to be recognized in the B-ultrasound image from the per-category segmentation results.
5. The method for improving the stability of deep learning to identify B-ultrasound images according to claim 4, characterized in that in step C2 the category of the object to be recognized is computed from the per-category segmentation results using any one of the following three methods:
First, compare the sum of the pixel probabilities in each channel;
Second, compare the segmented area of each channel;
Third, use a threshold on the area proportion of a given channel as the decision.
CN201811354227.0A 2018-11-14 2018-11-14 A method to improve the stability of deep learning to identify B-ultrasound images Pending CN109472313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811354227.0A CN109472313A (en) 2018-11-14 2018-11-14 A method to improve the stability of deep learning to identify B-ultrasound images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811354227.0A CN109472313A (en) 2018-11-14 2018-11-14 A method to improve the stability of deep learning to identify B-ultrasound images

Publications (1)

Publication Number Publication Date
CN109472313A true CN109472313A (en) 2019-03-15

Family

ID=65672503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811354227.0A Pending CN109472313A (en) 2018-11-14 2018-11-14 A method to improve the stability of deep learning to identify B-ultrasound images

Country Status (1)

Country Link
CN (1) CN109472313A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429451A (en) * 2020-04-15 2020-07-17 深圳市嘉骏实业有限公司 Medical ultrasonic image segmentation method and device
CN113888567A (en) * 2021-10-21 2022-01-04 中国科学院上海微系统与信息技术研究所 An image segmentation model training method, image segmentation method and device
CN114067118A (en) * 2022-01-12 2022-02-18 湖北晓雲科技有限公司 Processing method of aerial photogrammetry data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563383A (en) * 2017-08-24 2018-01-09 杭州健培科技有限公司 A kind of medical image auxiliary diagnosis and semi-supervised sample generation system
CN108629777A (en) * 2018-04-19 2018-10-09 麦克奥迪(厦门)医疗诊断系统有限公司 A kind of number pathology full slice image lesion region automatic division method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563383A (en) * 2017-08-24 2018-01-09 杭州健培科技有限公司 A kind of medical image auxiliary diagnosis and semi-supervised sample generation system
CN108629777A (en) * 2018-04-19 2018-10-09 麦克奥迪(厦门)医疗诊断系统有限公司 A kind of number pathology full slice image lesion region automatic division method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429451A (en) * 2020-04-15 2020-07-17 深圳市嘉骏实业有限公司 Medical ultrasonic image segmentation method and device
CN111429451B (en) * 2020-04-15 2024-01-16 深圳市嘉骏实业有限公司 Medical ultrasonic image segmentation method and device
CN113888567A (en) * 2021-10-21 2022-01-04 中国科学院上海微系统与信息技术研究所 An image segmentation model training method, image segmentation method and device
CN113888567B (en) * 2021-10-21 2024-05-14 中国科学院上海微系统与信息技术研究所 Training method of image segmentation model, image segmentation method and device
CN114067118A (en) * 2022-01-12 2022-02-18 湖北晓雲科技有限公司 Processing method of aerial photogrammetry data

Similar Documents

Publication Publication Date Title
Abousamra et al. Localization in the crowd with topological constraints
CN107330889B (en) An automatic analysis method of tongue color and fur color in traditional Chinese medicine based on convolutional neural network
CN110853051B (en) Cerebrovascular image segmentation method based on multi-attention dense connection generation countermeasure network
Kae et al. Augmenting CRFs with Boltzmann machine shape priors for image labeling
CN108109160A (en) It is a kind of that interactive GrabCut tongue bodies dividing method is exempted from based on deep learning
CN110929617B (en) Face-changing synthesized video detection method and device, electronic equipment and storage medium
CN105205475A (en) Dynamic gesture recognition method
US10986400B2 (en) Compact video representation for video event retrieval and recognition
CN103985381B (en) A kind of audio indexing method based on Parameter fusion Optimal Decision-making
JP2017520859A (en) Image object region recognition method and apparatus
CN103824090B (en) Adaptive face low-level feature selection method and face attribute recognition method
CN106096627A (en) The Polarimetric SAR Image semisupervised classification method that considering feature optimizes
CN105373777A (en) Face recognition method and device
Mittelman et al. Weakly supervised learning of mid-level features with Beta-Bernoulli process restricted Boltzmann machines
CN109472313A (en) A method to improve the stability of deep learning to identify B-ultrasound images
CN106407958A (en) Double-layer-cascade-based facial feature detection method
CN111274955A (en) Emotion recognition method and system based on audio-visual feature correlation fusion
CN105760472A (en) Video retrieval method and system
CN104636761A (en) Image semantic annotation method based on hierarchical segmentation
CN105117707A (en) Regional image-based facial expression recognition method
CN113643302A (en) Unsupervised medical image segmentation method and system based on active contour model
Yao et al. Parkinson’s disease and cleft lip and palate of pathological speech diagnosis using deep convolutional neural networks evolved by IPWOA
CN106169084A (en) A kind of SVM mammary gland sorting technique based on Gauss kernel parameter selection
Thatikonda et al. Diagnosis of Liver Tumor from CT Scan Images using Deep Segmentation Network with CMBOA based CNN
CN108765431B (en) Image segmentation method and application thereof in medical field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190315)