CN109472313A - A method for improving the stability of deep learning recognition of B-mode ultrasound images - Google Patents

A method for improving the stability of deep learning recognition of B-mode ultrasound images

Info

Publication number
CN109472313A
CN109472313A (application CN201811354227.0A)
Authority
CN
China
Prior art keywords
image
deep learning
ultrasound
semantic segmentation
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811354227.0A
Other languages
Chinese (zh)
Inventor
林江莉 (Lin Jiangli)
韩霖 (Han Lin)
陈科 (Chen Ke)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201811354227.0A
Publication of CN109472313A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/032Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for improving the stability of deep learning recognition of B-mode ultrasound images, comprising the following steps: A, prepare training data: combine the contour of the target in the B-mode ultrasound image with its class label to generate an image semantic segmentation label; B, training: train the mapping from B-mode ultrasound image to semantic segmentation label using a semantic segmentation model; C, recognition: feed the image to be recognized into the model to obtain segmentation regions of different classes and the segmentation result for each region. In the present invention, image semantic segmentation labels are generated from conventional classification labels and target contours, and a deep discriminative network is used to classify the target. This greatly improves the stability of the classification results and increases the practical value of deep learning in B-mode ultrasound image recognition.

Description

A method for improving the stability of deep learning recognition of B-mode ultrasound images
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a method for improving the stability of deep learning recognition of B-mode ultrasound images.
Background art
Deep learning has made great progress in traditional image recognition, but when applied to B-mode ultrasound image recognition it encounters several difficulties, one of which is the instability of the predicted results. When an ROI (region of interest) cropped from the same target image is recognized by a deep convolutional neural network, small differences in the manually selected ROI position can lead to large changes in the result. Experiments show that with a probability of about 20%, the result jumps between different classes. This problem confuses users and thus limits the application of deep learning in B-mode ultrasound image recognition.
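The instability described above can be measured directly. Below is a minimal sketch (not part of the patent; the toy image, threshold "classifier", and jitter values are hypothetical stand-ins) that counts how often a classifier's decision for an ROI survives small shifts of the ROI position:

```python
import numpy as np

def roi_stability(image, classify, center, size, shifts=(-3, 0, 3)):
    """Fraction of jittered ROI crops whose predicted class matches the
    prediction for the centered crop. `classify` maps a 2-D crop to a class id."""
    cy, cx = center
    h, w = size
    base = classify(image[cy:cy + h, cx:cx + w])
    same = total = 0
    for dy in shifts:
        for dx in shifts:
            crop = image[cy + dy:cy + dy + h, cx + dx:cx + dx + w]
            same += int(classify(crop) == base)
            total += 1
    return same / total

# toy demo: a brightness-threshold "classifier" on a synthetic image
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
cls = lambda roi: int(roi.mean() > 0.5)
score = roi_stability(img, cls, center=(16, 16), size=(32, 32))
```

A stable classifier yields a score near 1.0; the ~20% class-jumping behavior reported above would show up here as a markedly lower score on real ROIs.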
Summary of the invention
In view of the above problems, the present invention proposes a method for improving the stability of deep learning recognition of B-mode ultrasound images.
The technical solution of the present invention is a method for improving the stability of deep learning recognition of B-mode ultrasound images, comprising the following steps:
A. Prepare training data: combine the contour of the target in the B-mode ultrasound image with its class label to generate an image semantic segmentation label;
B. Training: train the mapping from B-mode ultrasound image to semantic segmentation label using a semantic segmentation model, obtaining a trained semantic segmentation network deep learning model;
C. Recognition: feed the image to be recognized into the trained semantic segmentation network deep learning model to obtain segmentation regions of different classes, and obtain the segmentation result for each region.
Optionally, step A specifically includes the following process:
A1. Classification: obtain the class information of the object to be recognized in each B-mode ultrasound image;
A2. Contour extraction: obtain the contour of the object to be recognized in each image;
A3. Label generation: generate the training target label from the class information and the contour; the label is an n-channel binarized image. The label may omit the background channel: in that case n equals the number of classes, the channel corresponding to the object's class contains the object contour region, and the other channels are 0. The label may also include a background channel: in that case n equals the number of classes plus 1, the channel corresponding to the object's class contains the object contour region, the background channel is the complement of the contour region, and the other channels are 0.
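The label construction of step A3 can be sketched as follows. This is a minimal numpy illustration of the variant with a background channel; the function and argument names are my own, not from the patent:

```python
import numpy as np

def make_segmentation_label(contour_mask, class_idx, n_classes, background=True):
    """Build the n-channel binarized label of step A3.

    contour_mask : 2-D 0/1 array, 1 inside the object contour.
    class_idx    : index of the object's class.
    background   : if True, append one extra channel holding the
                   complement of the contour region.
    """
    contour_mask = contour_mask.astype(np.uint8)
    n = n_classes + 1 if background else n_classes
    label = np.zeros((n,) + contour_mask.shape, dtype=np.uint8)
    label[class_idx] = contour_mask      # channel of the object's class
    if background:
        label[-1] = 1 - contour_mask     # background = complement of contour
    return label

# toy example: a 4x4 object region in an 8x8 image, 2 classes + background
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1
lbl = make_segmentation_label(mask, class_idx=0, n_classes=2)  # 3 channels
```

With `background=False` this reduces to the first variant of A3 (n equals the number of classes).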
Optionally, step B specifically includes the following process:
B1. Feed the B-mode ultrasound image into the semantic segmentation network deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss;
B2. Adjust the weights of the semantic segmentation network deep learning model using the backpropagation algorithm and stochastic gradient descent, so as to reduce the loss;
B3. Repeat steps B1-B2 until the result converges.
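The patent does not specify the loss used in step B1 beyond "compare with the label"; a common choice consistent with n-channel one-hot labels is per-pixel softmax cross-entropy. A minimal numpy sketch (my own naming, an assumption rather than the patent's prescribed loss):

```python
import numpy as np

def pixelwise_cross_entropy(logits, label):
    """Loss of step B1: compare the model's multi-channel output with the
    n-channel one-hot label, pixel by pixel.

    logits : (n, H, W) raw network outputs.
    label  : (n, H, W) binarized one-hot label from step A3.
    """
    z = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    log_softmax = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    return -(label * log_softmax).sum(axis=0).mean()

# sanity check: uniform logits over 3 channels give loss = ln(3)
onehot = np.zeros((3, 4, 4))
onehot[0] = 1.0
loss = pixelwise_cross_entropy(np.zeros((3, 4, 4)), onehot)
```

Step B2 then updates each weight by gradient descent, schematically `w -= learning_rate * grad(loss, w)`, with the gradients obtained by backpropagation.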
Optionally, step C specifically includes the following process:
C1. Feed the B-mode ultrasound image to be recognized into the trained semantic segmentation network deep learning model to obtain the segmentation result for each class;
C2. Compute the class of the object to be recognized in the B-mode ultrasound image from the per-class segmentation results.
Optionally, in step C2 the class of the object to be recognized in the B-mode ultrasound image is computed from the per-class segmentation results by any one of the following three methods:
first, comparing the sum of pixel probabilities in each channel;
second, comparing the area of each channel;
third, using a threshold on the area proportion of a given channel as the decision criterion.
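The three decision rules of step C2 can be sketched as one hypothetical helper (the function name and the 0.5 binarization threshold for "area" are my own assumptions, not fixed by the patent):

```python
import numpy as np

def decide_class(probs, rule="prob_sum", channel=None, threshold=0.5):
    """Derive a class from an (n, H, W) per-channel probability map using
    one of the three decision rules of step C2."""
    if rule == "prob_sum":        # rule 1: sum of pixel probabilities
        return int(np.argmax(probs.sum(axis=(1, 2))))
    if rule == "area":            # rule 2: area of each channel
        return int(np.argmax((probs > 0.5).sum(axis=(1, 2))))
    if rule == "area_ratio":      # rule 3: threshold on one channel's
        return int((probs[channel] > 0.5).mean() > threshold)  # area proportion
    raise ValueError(rule)

# toy 2-class map: channel 1 dominates the upper half of the image
p = np.zeros((2, 4, 4))
p[0] = 0.1
p[1, :2, :] = 0.9
```

For rule 3, the return value 1 simply means "the queried channel's area proportion exceeds the threshold" (e.g. deciding malignant when the malignant channel is large enough).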
The beneficial effects of the present invention are: image semantic segmentation labels are generated from conventional classification labels and target contours, and a deep discriminative network is used to classify the target. This greatly improves the stability of the classification results and increases the practical value of deep learning in B-mode ultrasound image recognition.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the present invention.
Fig. 2 is an example of generating a semantic segmentation label.
Fig. 3 is a schematic diagram of the structure of the semantic segmentation network.
Fig. 4 is an illustration of segmenting a B-mode ultrasound tumor image using the semantic segmentation network.
Specific embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
As shown in Fig. 1, a method for improving the stability of deep learning recognition of B-mode ultrasound images comprises the following steps:
A. Prepare training data: combine the contour of the target in the B-mode ultrasound image with its class label to generate an image semantic segmentation label;
B. Training: train the mapping from B-mode ultrasound image to semantic segmentation label using a semantic segmentation model, obtaining a trained semantic segmentation network deep learning model;
C. Recognition: feed the image to be recognized into the trained semantic segmentation network deep learning model to obtain segmentation regions of different classes, and obtain the segmentation result for each region.
Optionally, step A specifically includes the following process:
A1. Classification: obtain the class information of the object to be recognized in each B-mode ultrasound image;
A2. Contour extraction: obtain the contour of the object to be recognized in each image;
A3. Label generation: generate the training target label from the class information and the contour; the label is an n-channel binarized image. The label may omit the background channel: in that case n equals the number of classes, the channel corresponding to the object's class contains the object contour region, and the other channels are 0. The label may also include a background channel: in that case n equals the number of classes plus 1, the channel corresponding to the object's class contains the object contour region, the background channel is the complement of the contour region, and the other channels are 0.
Optionally, step B specifically includes the following process:
B1. Feed the B-mode ultrasound image into the semantic segmentation network deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss;
B2. Adjust the weights of the semantic segmentation network deep learning model using the backpropagation algorithm and stochastic gradient descent, so as to reduce the loss;
B3. Repeat steps B1-B2 until the result converges.
Optionally, step C specifically includes the following process:
C1. Feed the B-mode ultrasound image to be recognized into the trained semantic segmentation network deep learning model to obtain the segmentation result for each class;
C2. Compute the class of the object to be recognized in the B-mode ultrasound image from the per-class segmentation results.
Optionally, in step C2 the class of the object to be recognized in the B-mode ultrasound image is computed from the per-class segmentation results by any one of the following three methods:
first, comparing the sum of pixel probabilities in each channel;
second, comparing the area of each channel;
third, using a threshold on the area proportion of a given channel as the decision criterion.
Specifically, the recognition of B-mode ultrasound breast tumor images is taken as an example:
1. On the training set, annotate the benign/malignant class of each tumor together with the tumor ROI and its contour, and generate the semantic segmentation labels. Fig. 2 shows an example of generating a semantic segmentation label: on the left are the tumor ROI, contour and benign/malignant class; on the right is the generated 3-channel semantic segmentation label.
2. Train the semantic segmentation network with the tumor ROI images as input and the semantic segmentation labels as ground truth; the network structure is shown in Fig. 3.
3. For a test image, after the user sketches the tumor ROI, feed it to the semantic segmentation network to obtain the segmentation result, and finally compute the benign/malignant class of the tumor. This can be done by comparing the sum of pixel probabilities in each channel, by comparing the area of each channel, or by deciding malignant when the area of the malignant channel exceeds a certain proportion. Fig. 4 illustrates segmenting a B-mode ultrasound tumor image with the semantic segmentation network.
It should be noted that although segmentation-based recognition of breast ultrasound tumors is used as an example above, the method is not limited to it: it can be applied to class recognition of any target in images of any human organ acquired with B-mode ultrasound, and any semantic segmentation network model can be used as the semantic segmentation network.
The above embodiments only express specific implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention.

Claims (5)

1. A method for improving the stability of deep learning recognition of B-mode ultrasound images, characterized by comprising the following steps:
A. Prepare training data: combine the contour of the target in the B-mode ultrasound image with its class label to generate an image semantic segmentation label;
B. Training: train the mapping from B-mode ultrasound image to semantic segmentation label using a semantic segmentation model, obtaining a trained semantic segmentation network deep learning model;
C. Recognition: feed the image to be recognized into the trained semantic segmentation network deep learning model to obtain segmentation regions of different classes, and obtain the segmentation result for each region.
2. The method for improving the stability of deep learning recognition of B-mode ultrasound images according to claim 1, characterized in that step A specifically includes the following process:
A1. Classification: obtain the class information of the object to be recognized in each B-mode ultrasound image;
A2. Contour extraction: obtain the contour of the object to be recognized in each image;
A3. Label generation: generate the training target label from the class information and the contour; the label is an n-channel binarized image. The label may omit the background channel: in that case n equals the number of classes, the channel corresponding to the object's class contains the object contour region, and the other channels are 0. The label may also include a background channel: in that case n equals the number of classes plus 1, the channel corresponding to the object's class contains the object contour region, the background channel is the complement of the contour region, and the other channels are 0.
3. The method for improving the stability of deep learning recognition of B-mode ultrasound images according to claim 1, characterized in that step B specifically includes the following process:
B1. Feed the B-mode ultrasound image into the semantic segmentation network deep learning model, compute the multi-channel segmentation result, compare it with the label, and compute the loss;
B2. Adjust the weights of the semantic segmentation network deep learning model using the backpropagation algorithm and stochastic gradient descent, so as to reduce the loss;
B3. Repeat steps B1-B2 until the result converges.
4. The method for improving the stability of deep learning recognition of B-mode ultrasound images according to claim 3, characterized in that step C specifically includes the following process:
C1. Feed the B-mode ultrasound image to be recognized into the trained semantic segmentation network deep learning model to obtain the segmentation result for each class;
C2. Compute the class of the object to be recognized in the B-mode ultrasound image from the per-class segmentation results.
5. The method for improving the stability of deep learning recognition of B-mode ultrasound images according to claim 4, characterized in that in step C2 the class of the object to be recognized in the B-mode ultrasound image is computed from the per-class segmentation results by any one of the following three methods:
first, comparing the sum of pixel probabilities in each channel;
second, comparing the area of each channel;
third, using a threshold on the area proportion of a given channel as the decision criterion.
CN201811354227.0A 2018-11-14 2018-11-14 A method for improving the stability of deep learning recognition of B-mode ultrasound images Pending CN109472313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811354227.0A CN109472313A (en) 2018-11-14 2018-11-14 A method for improving the stability of deep learning recognition of B-mode ultrasound images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811354227.0A CN109472313A (en) 2018-11-14 2018-11-14 A method for improving the stability of deep learning recognition of B-mode ultrasound images

Publications (1)

Publication Number Publication Date
CN109472313A (en) 2019-03-15

Family

ID=65672503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811354227.0A Pending CN109472313A (en) 2018-11-14 2018-11-14 A method of it promoting deep learning and identifies B ultrasound picture steadiness

Country Status (1)

Country Link
CN (1) CN109472313A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563383A (en) * 2017-08-24 2018-01-09 杭州健培科技有限公司 A kind of medical image auxiliary diagnosis and semi-supervised sample generation system
CN108629777A (en) * 2018-04-19 2018-10-09 麦克奥迪(厦门)医疗诊断系统有限公司 A kind of number pathology full slice image lesion region automatic division method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429451A (en) * 2020-04-15 2020-07-17 深圳市嘉骏实业有限公司 Medical ultrasonic image segmentation method and device
CN111429451B (en) * 2020-04-15 2024-01-16 深圳市嘉骏实业有限公司 Medical ultrasonic image segmentation method and device
CN113888567A (en) * 2021-10-21 2022-01-04 中国科学院上海微系统与信息技术研究所 Training method of image segmentation model, image segmentation method and device
CN113888567B (en) * 2021-10-21 2024-05-14 中国科学院上海微系统与信息技术研究所 Training method of image segmentation model, image segmentation method and device
CN114067118A (en) * 2022-01-12 2022-02-18 湖北晓雲科技有限公司 Processing method of aerial photogrammetry data

Similar Documents

Publication Publication Date Title
JP6397986B2 (en) Image object region recognition method and apparatus
CN109145712B (en) Text information fused GIF short video emotion recognition method and system
CN106682696B (en) The more example detection networks and its training method refined based on online example classification device
US10986400B2 (en) Compact video representation for video event retrieval and recognition
Mittelman et al. Weakly supervised learning of mid-level features with Beta-Bernoulli process restricted Boltzmann machines
CN107506796A (en) A kind of alzheimer disease sorting technique based on depth forest
CN106991445A (en) A kind of ultrasonic contrast tumour automatic identification and detection method based on deep learning
CN109472313A (en) A method of it promoting deep learning and identifies B ultrasound picture steadiness
CN109409240A (en) A kind of SegNet remote sensing images semantic segmentation method of combination random walk
Cengil et al. Poisonous mushroom detection using YOLOV5
CN106709528A (en) Method and device of vehicle reidentification based on multiple objective function deep learning
CN104952073A (en) Shot boundary detecting method based on deep learning
CN110019779B (en) Text classification method, model training method and device
CN105117707A (en) Regional image-based facial expression recognition method
CN105446955A (en) Adaptive word segmentation method
CN104680193A (en) Online target classification method and system based on fast similarity network fusion algorithm
CN112927266B (en) Weak supervision time domain action positioning method and system based on uncertainty guide training
CN105893941B (en) A kind of facial expression recognizing method based on area image
CN106056627B (en) A kind of robust method for tracking target based on local distinctive rarefaction representation
Yao et al. Parkinson’s disease and cleft lip and palate of pathological speech diagnosis using deep convolutional neural networks evolved by IPWOA
Alhroob et al. Fuzzy min-max classifier based on new membership function for pattern classification: a conceptual solution
Singha et al. Recognition of global hand gestures using self co-articulation information and classifier fusion
Thatikonda et al. Diagnosis of Liver Tumor from CT Scan Images using Deep Segmentation Network with CMBOA based CNN
Palo et al. Classification of emotional speech of children using probabilistic neural network
CN116433679A (en) Inner ear labyrinth multi-level labeling pseudo tag generation and segmentation method based on spatial position structure priori

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190315