CN109965905B - Contrast region detection imaging method based on deep learning - Google Patents

Contrast region detection imaging method based on deep learning

Info

Publication number
CN109965905B
CN109965905B (application CN201910289375.7A)
Authority
CN
China
Prior art keywords
imaging
radio frequency
signal
neural network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910289375.7A
Other languages
Chinese (zh)
Other versions
CN109965905A (en)
Inventor
余锦华
汪源源
邓寅晖
童宇宸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University
Priority to CN201910289375.7A
Publication of CN109965905A
Application granted
Publication of CN109965905B
Legal status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5269 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts

Abstract

The invention provides a contrast region detection imaging method based on deep learning. S1, randomly select a plurality of original ultrasonic RF images and segment the signal on each of their scanning lines into a number of equal-length one-dimensional RF signal segments; S2, train and test a convolutional neural network with the RF signal segments; S3, divide the RF signal on each scanning line of the measured object into equal-length one-dimensional measured RF signal segments and input them into the trained convolutional neural network; S4, set to 0 the P imaging points in the middle of every one-dimensional measured RF signal segment labeled as a tissue signal; S5, recombine the one-dimensional measured RF signal segments to obtain a pre-image; S6, improve the brightness of the microbubble image in the pre-image by the microbubble mother wavelet imaging method; and S7, improve the image contrast by an eigenspace-based minimum variance algorithm. The invention applies the concept of deep learning to the classification of ultrasonic RF signals, filters out tissue interference more effectively, and thereby improves the accuracy of clinical medical diagnosis.

Description

Contrast region detection imaging method based on deep learning
Technical Field
The invention relates to the field of image segmentation, in particular to a contrast region detection imaging method based on deep learning.
Background
An ultrasonic contrast agent is composed of a large number of microbubbles; it enhances the ultrasonic backscatter signal and gives the echo rich harmonic components. Injected into the human body, the contrast agent improves the quality of the ultrasound image and gives ultrasonic diagnosis the ability to identify tiny lesions.
In recent years, a series of new contrast imaging methods has appeared (harmonic imaging, dual-pulse transmit imaging, coded-pulse imaging, microbubble wavelet techniques). They share one principle: extract the harmonic components of the microbubbles as fully as possible while filtering out the fundamental components from the tissue, thereby improving the contrast of the contrast image.
In the prior art, however, extraction of the microbubble signal component and filtering of the tissue signal component are completed in a single step. Because the method must balance both goals, neither is handled with much specificity: when the tissue signal is too strong, the filtering performs poorly and the quality of the final contrast image suffers.
Disclosure of Invention
The invention aims to provide a contrast region detection imaging method based on deep learning. The concept of deep learning is applied to the classification of ultrasonic RF (Radio Frequency) signals: a convolutional neural network first distinguishes microbubble signals from tissue signals in the original ultrasonic RF image of contrast imaging, yielding a contrast pre-image; the imaging quality of the microbubble signals in the pre-image is then further improved by a microbubble mother wavelet imaging method and a beamforming algorithm.
In order to achieve the above object, the present invention provides a contrast region detection imaging method based on deep learning, in which an ultrasound RF image for contrast imaging is composed of RF signals on a plurality of scan lines from left to right, the contrast region detection imaging method comprising the steps of:
S1, randomly selecting a plurality of original ultrasonic RF images of contrast imaging, dividing them into two groups, and establishing an experimental data set; segmenting the RF signal on each scanning line of the original ultrasonic RF images sequentially from top to bottom to obtain a plurality of equal-length one-dimensional RF signal segments; each RF signal segment contains n consecutive imaging points, adjacent RF signal segments are spaced by m consecutive imaging points, and one imaging point corresponds to one RF signal;
S2, establishing a classification label set Y = {tissue signal, microbubble signal}; inputting the RF signal segments extracted from the first group of original ultrasonic RF images into a convolutional neural network to obtain a trained convolutional neural network; inputting the RF signal segments of the second group of original ultrasonic RF images into the trained convolutional neural network to test it;
S3, performing contrast imaging of the measured object and segmenting the RF signal on each of its scanning lines in sequence to obtain a plurality of equal-length one-dimensional measured RF signal segments; each measured RF signal segment contains n consecutive imaging points, and adjacent measured RF signal segments are spaced by m consecutive imaging points;
S4, inputting the measured RF signal segments into the trained convolutional neural network and, when the classification label of a measured RF signal segment is a tissue signal, zeroing the P imaging points in the middle of that segment;
S5, recombining the measured RF signal segments by scanning line to obtain a two-dimensional pre-image of the measured object;
S6, improving the brightness of the microbubble imaging points in the pre-image by a microbubble mother wavelet imaging method;
S7, improving the contrast of the image obtained in step S6 by a beamforming algorithm to obtain a two-dimensional ultrasonic image of the measured object.
In steps S1 and S3, m = 5 and n = 60.
In step S4, zeroing the P imaging points in the middle of the measured RF signal segment specifically means setting the RF signal values of the 29th to 33rd imaging points of the measured signal segment to 0, where P = 5.
Recombining the measured RF signal segments by scanning line in step S5 specifically means that, after step S4, the measured RF signal segments on each scanning line are sampled every 5 imaging points; a two-dimensional pre-image of the measured object is formed from all the sampling points; and in the pre-image, each sampling point is on its original scanning line.
The convolutional neural network is a U-net convolutional neural network.
The U-net convolutional neural network takes cross entropy as its cost function, the ReLU function as its nonlinear activation function, and the Adam algorithm as its optimization algorithm.
The first group of original ultrasound RF images contains four times as many images as the second group.
Compared with the prior art, the invention has the following advantage: before the ultrasonic RF signals are processed into an ultrasonic RF image as in the prior art, a deep learning method is applied to classify the one-dimensional ultrasonic RF signals and preliminarily screen out most tissue signals, yielding a pre-image of the ultrasonic RF image. The final ultrasonic RF image is then obtained from the pre-image by the microbubble mother wavelet imaging method and the eigenspace-based minimum variance algorithm. The invention can thus effectively filter out the interference of tissue signals and further improve the accuracy of clinical medical diagnosis.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings used in the description are briefly introduced below. The drawings show one embodiment of the present invention; those skilled in the art can obtain other drawings from them without creative effort:
FIG. 1 is a flowchart of a method for imaging a contrast region based on deep learning according to the present invention;
FIG. 2 is a schematic diagram of the RF signal segmentation on each scan line of the actual measured object in step S3 according to the present invention;
FIG. 3 is a schematic diagram of zeroing the 29th to 33rd imaging points of a measured signal segment in step S4 according to the present invention;
FIG. 4 is a schematic diagram illustrating the sampling result of the actually measured RF signal segment in step S5 according to the present invention;
FIG. 5 is a schematic diagram of recombining all the sampling points to form a two-dimensional pre-image of the measured object in step S5 according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
The invention provides a contrast region detection imaging method based on deep learning. As shown in fig. 1, the contrast region detection imaging method includes the steps of:
and S1, randomly selecting a plurality of original ultrasonic RF images for contrast imaging, dividing the original ultrasonic RF images into two groups, and establishing an experimental data set, wherein the original ultrasonic RF images of the first group are four times as large as the original ultrasonic RF images of the second group. And sequentially segmenting the RF signals on each scanning line of the original ultrasonic RF image from top to bottom to obtain a plurality of one-dimensional RF signal segments with equal length. The RF signal segment comprises 60 continuous imaging points, adjacent RF signal segments are separated by 5 continuous imaging points, and one imaging point corresponds to one RF signal.
S2, a classification label set Y = {tissue signal, microbubble signal} is established. The RF signal segments extracted from the first group of original ultrasonic RF images are input into a U-net convolutional neural network to obtain a trained convolutional neural network; the RF signal segments of the second group of original ultrasonic RF images are then input into the trained convolutional neural network to test it. The U-net convolutional neural network takes cross entropy as its cost function, the ReLU (Rectified Linear Unit) function as its nonlinear activation function, and the Adam algorithm as its optimization algorithm.
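The patent names the network (U-net), cost function (cross entropy), activation (ReLU), and optimizer (Adam) but does not disclose the layer configuration. The PyTorch sketch below therefore uses a simplified 1-D convolutional classifier as a stand-in for the patented U-net; every layer size here is an assumption, not the disclosed design.

```python
import torch
import torch.nn as nn

class SegmentClassifier(nn.Module):
    """Simplified 1-D CNN labeling a 60-point RF segment as tissue (0)
    or microbubble (1); a stand-in, not the patented U-net."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                       # 60 -> 30 points
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                       # 30 -> 15 points
        )
        self.classifier = nn.Linear(32 * 15, 2)    # {tissue, microbubble}

    def forward(self, x):                          # x: (batch, 1, 60)
        return self.classifier(self.features(x).flatten(1))

model = SegmentClassifier()
criterion = nn.CrossEntropyLoss()                  # cross entropy cost, as in S2
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on dummy data.
x = torch.randn(8, 1, 60)                          # 8 RF segments of 60 points
y = torch.randint(0, 2, (8,))                      # 0 = tissue, 1 = microbubble
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A full U-net would add an upsampling decoder path with skip connections; for a two-class segment label, a plain encoder with a linear head is the simplest runnable illustration.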
S3, contrast imaging is performed on the measured object, and the RF signal on each of its scanning lines is segmented in sequence to obtain a plurality of equal-length one-dimensional measured RF signal segments. Each measured RF signal segment contains 60 consecutive imaging points, and adjacent measured RF signal segments are spaced by 5 consecutive imaging points.
FIG. 2 is a schematic diagram of the RF signal segmentation for each scanning line of the measured object. FIG. 2 shows k scanning lines, whose RF signals are used to image the measured object. Each scanning line is divided into M measured RF signal segments, where RF_ij denotes the j-th measured RF signal segment on the i-th scanning line, i ∈ [1, k], j ∈ [1, M], and RF_ij contains 60 imaging points. RF_ij and RF_i(j+1) are spaced by 5 imaging points (j ∈ [1, M-1]), and RF_ij and RF_i(j-1) are spaced by 5 imaging points (j ∈ [2, M]). In step S1, the RF signals on each scanning line of the original ultrasonic RF images are segmented sequentially from top to bottom in the same manner as shown in FIG. 2.
S4, the measured RF signal segments are input into the trained convolutional neural network, and when the classification label of a measured RF signal segment is a tissue signal, the RF signal values of its 29th to 33rd imaging points are set to 0.
As shown in FIG. 3, x_1, ..., x_60 denote the 60 imaging points of a measured RF signal segment RF_ij. When the classification label of RF_ij is a tissue signal, the RF signal values of the 5 imaging points x_29 to x_33 are set to 0.
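A one-line NumPy helper makes the zeroing concrete; the function name is illustrative.

```python
import numpy as np

def zero_tissue_center(segment):
    """Zero the 5 middle imaging points (the 29th to 33rd, 1-based) of a
    60-point segment that the network labeled as a tissue signal."""
    out = np.asarray(segment, dtype=float).copy()
    out[28:33] = 0.0          # 0-based indices 28..32 == points 29..33
    return out
```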
S5, the measured RF signal segments are recombined by scanning line. Specifically, after step S4, the measured RF signal segments on each scanning line are sampled every 5 imaging points. As shown in FIG. 4, sampling the current measured RF signal segment RF_ij yields RF_ij′, which contains every fifth imaging point of RF_ij, 12 imaging points in all.
A two-dimensional pre-image of the measured object is then formed from all the sampling points, each sampling point remaining on its original scanning line. As shown in FIG. 5, the pre-image of the measured object is composed of RF_11′ to RF_kM′, and RF_11′ to RF_kM′ are still on their respective scanning lines.
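A short sketch of the every-5-points sampling follows. Which offset is retained (points 5, 10, ..., 60 versus 1, 6, ..., 56) is not fixed by the translation, so the choice below is an assumption.

```python
import numpy as np

def downsample_segment(segment, step=5):
    """Keep every `step`-th imaging point of a 60-point segment,
    yielding the 12 retained points of step S5 (points 5, 10, ..., 60)."""
    return segment[step - 1::step]       # 0-based indices 4, 9, ..., 59

seg = np.arange(1, 61, dtype=float)      # stand-in segment: points 1..60
print(downsample_segment(seg))           # 12 values: 5, 10, ..., 60
```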
S6, the brightness of the microbubble imaging points in the pre-image is improved by the microbubble mother wavelet imaging method.
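The patent does not detail the microbubble mother wavelet method beyond naming it. One common formulation correlates each RF line with a mother wavelet derived from a simulated or measured microbubble echo, boosting samples that resemble that echo; the sketch below assumes that formulation, and `mother_wavelet` is a placeholder input, not a disclosed waveform.

```python
import numpy as np
from scipy.signal import fftconvolve

def microbubble_wavelet_enhance(rf_line, mother_wavelet):
    """Correlate an RF line with a microbubble-derived mother wavelet
    (correlation = convolution with the time-reversed kernel)."""
    return fftconvolve(rf_line, mother_wavelet[::-1], mode="same")

rf_line = np.random.randn(2000)
# Toy stand-in echo: windowed 5 MHz tone at a 40 MHz sampling rate.
wavelet = np.sin(2 * np.pi * 5e6 * np.arange(64) / 40e6) * np.hanning(64)
enhanced = microbubble_wavelet_enhance(rf_line, wavelet)
```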
S7, the contrast of the image obtained in step S6 is improved by a beamforming algorithm, yielding the two-dimensional ultrasonic image of the measured object.
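The claims identify this beamforming algorithm as an eigenspace-based minimum variance algorithm. The sketch below implements the generic ESMV recipe (minimum variance weights projected onto the signal subspace of the sample covariance); the diagonal-loading factor, the subspace threshold delta, and the absence of subarray averaging are assumptions rather than disclosed parameters.

```python
import numpy as np

def esmv_beamform(x, delta=0.5):
    """Eigenspace-based minimum variance (ESMV) output for one imaging
    point, given x: (channels, snapshots) of delay-aligned channel data."""
    n = x.shape[0]
    R = x @ x.conj().T / x.shape[1]                 # sample covariance
    R += 1e-3 * np.trace(R).real / n * np.eye(n)    # diagonal loading (assumed)
    a = np.ones((n, 1))                             # steering vector after delays
    Ri_a = np.linalg.solve(R, a)
    w_mv = Ri_a / (a.conj().T @ Ri_a)               # minimum variance weights
    vals, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    Es = vecs[:, vals >= delta * vals.max()]        # signal subspace
    w = Es @ (Es.conj().T @ w_mv)                   # project MV weights
    return (w.conj().T @ x.mean(axis=1, keepdims=True)).item()

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 16))                   # 32 channels, 16 snapshots
print(esmv_beamform(x))
```

Projecting the minimum variance weights onto the dominant eigenvectors suppresses the noise subspace, which is what raises contrast relative to plain minimum variance beamforming.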
Compared with the prior art, the invention has the following advantage: before the ultrasonic RF signals are processed into an ultrasonic RF image as in the prior art, a deep learning method is applied to classify the one-dimensional ultrasonic RF signals and preliminarily screen out most tissue signals, yielding a pre-image of the ultrasonic RF image. The final ultrasonic RF image is then obtained from the pre-image by the microbubble mother wavelet imaging method and the eigenspace-based minimum variance algorithm. The invention can thus effectively filter out the interference of tissue signals and further improve the accuracy of clinical medical diagnosis.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A contrast region detection imaging method based on deep learning, wherein an ultrasonic radio frequency image for contrast imaging is composed of the radio frequency signals on a plurality of scanning lines ordered from left to right, the contrast region detection imaging method comprising the following steps:
S1, randomly selecting a plurality of original ultrasonic radio frequency images of contrast imaging, dividing them into two groups, and establishing an experimental data set; segmenting the radio frequency signal on each scanning line of the original ultrasonic radio frequency images sequentially from top to bottom to obtain a plurality of equal-length one-dimensional radio frequency signal segments; each radio frequency signal segment contains n consecutive imaging points, adjacent radio frequency signal segments are spaced by m consecutive imaging points, and one imaging point corresponds to one radio frequency signal;
S2, establishing a classification label set Y = {tissue signal, microbubble signal}; inputting the radio frequency signal segments extracted from the first group of original ultrasonic radio frequency images into a convolutional neural network to obtain a trained convolutional neural network; inputting the radio frequency signal segments of the second group of original ultrasonic radio frequency images into the trained convolutional neural network to test it;
S3, performing contrast imaging of the measured object and segmenting the radio frequency signal on each of its scanning lines in sequence to obtain a plurality of equal-length one-dimensional measured radio frequency signal segments; each measured radio frequency signal segment contains n consecutive imaging points, and adjacent measured radio frequency signal segments are spaced by m consecutive imaging points;
S4, inputting the measured radio frequency signal segments into the trained convolutional neural network and, when the classification label of a measured radio frequency signal segment is a tissue signal, zeroing the P imaging points in the middle of that segment;
S5, recombining the measured radio frequency signal segments by scanning line to obtain a two-dimensional pre-image of the measured object;
S6, improving the brightness of the microbubble imaging points in the pre-image by a microbubble mother wavelet imaging method;
S7, improving the contrast of the image obtained in step S6 by a beamforming algorithm to obtain a two-dimensional ultrasonic image of the measured object.
2. The deep learning-based contrast region detection imaging method according to claim 1, wherein, in steps S1 and S3, m = 5 and n = 60.
3. The method as claimed in claim 1, wherein zeroing the P imaging points in the middle of the measured radio frequency signal segment in step S4 specifically means setting the radio frequency signal values of the 29th to 33rd imaging points of the measured signal segment to 0, where P = 5.
4. The method as claimed in claim 1, wherein recombining the measured radio frequency signal segments by scanning line in step S5 specifically means that, after step S4, the measured radio frequency signal segments on each scanning line are sampled every 5 imaging points; a two-dimensional pre-image of the measured object is formed from all the sampling points; and in the pre-image, each sampling point is on its original scanning line.
5. The deep learning based contrast region detection imaging method according to claim 1, wherein the convolutional neural network is a U-net convolutional neural network.
6. The deep learning-based contrast region detection imaging method according to claim 5, wherein the U-net convolutional neural network uses cross entropy as its cost function, the ReLU function as its nonlinear activation function, and the Adam algorithm as its optimization algorithm.
7. The deep learning-based contrast region detection imaging method according to claim 1, wherein the beamforming algorithm is an eigenspace-based minimum variance algorithm.
8. The deep learning-based contrast region detection imaging method according to claim 1, wherein the first group of original ultrasound radio frequency images contains four times as many images as the second group.
CN201910289375.7A 2019-04-11 2019-04-11 Contrast region detection imaging method based on deep learning Active CN109965905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910289375.7A CN109965905B (en) 2019-04-11 2019-04-11 Contrast region detection imaging method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910289375.7A CN109965905B (en) 2019-04-11 2019-04-11 Contrast region detection imaging method based on deep learning

Publications (2)

Publication Number Publication Date
CN109965905A CN109965905A (en) 2019-07-05
CN109965905B (en) 2020-02-11

Family

ID=67084140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910289375.7A Active CN109965905B (en) 2019-04-11 2019-04-11 Contrast region detection imaging method based on deep learning

Country Status (1)

Country Link
CN (1) CN109965905B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110554393B (en) * 2019-07-12 2023-05-23 华南理工大学 High-contrast minimum variance imaging method based on deep learning
CN110399915A (en) * 2019-07-23 2019-11-01 王英伟 A kind of Ultrasound Image Recognition Method and its system based on deep learning
CN113436109B (en) * 2021-07-08 2022-10-14 清华大学 Ultrafast high-quality plane wave ultrasonic imaging method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1757381A (en) * 2005-09-21 2006-04-12 北京市肿瘤防治研究所 Method for improving resolution of ultrasonic image-forming image, and ultrasonic contrast image-forming apparatus
CN103330576A (en) * 2013-06-09 2013-10-02 西安交通大学 Micro-elasticity imaging method based on tissue microbubble dynamics model
CN105574820A (en) * 2015-12-04 2016-05-11 南京云石医疗科技有限公司 Deep learning-based adaptive ultrasound image enhancement method
CN106991445A (en) * 2017-04-05 2017-07-28 重庆大学 A kind of ultrasonic contrast tumour automatic identification and detection method based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5417215A (en) * 1994-02-04 1995-05-23 Long Island Jewish Medical Center Method of tissue characterization by ultrasound
US6514204B2 (en) * 2000-07-20 2003-02-04 Riverside Research Institute Methods for estimating tissue strain
CN103381096B (en) * 2013-04-19 2015-04-15 西安交通大学 Blood perfusion separation detecting and imaging method for bone surface capillary
BR112016013880A2 (en) * 2013-12-20 2017-08-08 Koninklijke Philips Nv SYSTEMS FOR TRACKING A PENETRANT INSTRUMENT, AND METHOD FOR TRACKING A PENETRANT INSTRUMENT UNDER THE CONTROL OF A WORKSTATION
US9589374B1 (en) * 2016-08-01 2017-03-07 12 Sigma Technologies Computer-aided diagnosis system for medical images using deep convolutional neural networks
US11832969B2 (en) * 2016-12-22 2023-12-05 The Johns Hopkins University Machine learning approach to beamforming

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1757381A (en) * 2005-09-21 2006-04-12 北京市肿瘤防治研究所 Method for improving resolution of ultrasonic image-forming image, and ultrasonic contrast image-forming apparatus
CN103330576A (en) * 2013-06-09 2013-10-02 西安交通大学 Micro-elasticity imaging method based on tissue microbubble dynamics model
CN105574820A (en) * 2015-12-04 2016-05-11 南京云石医疗科技有限公司 Deep learning-based adaptive ultrasound image enhancement method
CN106991445A (en) * 2017-04-05 2017-07-28 重庆大学 A kind of ultrasonic contrast tumour automatic identification and detection method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Segmentation of breast anatomy for automated whole breast ultrasound images with boundary regularized convolutional encoder–decoder network; Baiying Lei et al.; Neurocomputing; 2018-09-22; pp. 178-186 *

Also Published As

Publication number Publication date
CN109965905A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
Lahoud et al. Zero-learning fast medical image fusion
CN109965905B (en) Contrast region detection imaging method based on deep learning
CN111667447A (en) Intravascular image fusion method and system and image acquisition device
Rihana et al. Automated algorithm for ovarian cysts detection in ultrasonogram
Jabbar et al. Using convolutional neural network for edge detection in musculoskeletal ultrasound images
CN109919929A (en) A kind of fissuring of tongue feature extracting method based on wavelet transformation
Magud et al. Medical ultrasound image speckle noise reduction by adaptive median filter
Ali et al. Detection and segmentation of hemorrhage stroke using textural analysis on brain CT images
CN112950644B (en) Neonatal brain image segmentation method and model construction method based on deep learning
Goebel et al. Reading imagined letter shapes from the mind’s eye using real-time 7 tesla fMRI
CN103871057A (en) Magnetic resonance image-based bone segmentation method and system thereof
CN112741651B (en) Method and system for processing ultrasonic image of endoscope
Ullah et al. Histogram equalization based enhancement and mr brain image skull stripping using mathematical morphology
Muthiah et al. Fusion of MRI and PET images using deep learning neural networks
CN115553816A (en) Portable three-dimensional carotid artery ultrasonic automatic diagnosis system and method
CN111829956B (en) Photoacoustic endoscopic quantitative tomography method and system based on layered guidance of ultrasonic structure
CN111951241B (en) Method for measuring and displaying muscle deformation in aquatic animal exercise process
Almi'ani et al. A modified region growing based algorithm to vessel segmentation in magnetic resonance angiography
El Zein et al. A Deep Learning Framework for Denoising MRI Images using Autoencoders
Bhalla et al. Automatic fetus head segmentation in ultrasound images by attention based encoder decoder network
KR101024857B1 (en) Ultrasound system and method for performing color modeling processing on three-dimensional ultrasound image
Afzal et al. A novel medical image fusion scheme using weighted sum of multi-scale fusion results
US11410348B2 (en) Imaging method and device
CN115393301B (en) Image histology analysis method and device for liver two-dimensional shear wave elastic image
Kong et al. MRI and SPECT image fusion using saliency capturing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant