CN115553816A - Portable three-dimensional carotid artery ultrasonic automatic diagnosis system and method


Info

Publication number
CN115553816A
Authority
CN
China
Prior art keywords
dimensional
convolution unit
output
image
region
Prior art date
Legal status
Pending
Application number
CN202211262883.4A
Other languages
Chinese (zh)
Inventor
郑锐
李佳文
陈曼
黄芸谦
Current Assignee
Shanghai Tong Ren Hospital
ShanghaiTech University
Original Assignee
Shanghai Tong Ren Hospital
ShanghaiTech University
Priority date
Filing date
Publication date
Application filed by Shanghai Tong Ren Hospital, ShanghaiTech University filed Critical Shanghai Tong Ren Hospital
Priority to CN202211262883.4A
Publication of CN115553816A
Legal status: Pending


Classifications

    • A61B 8/0891 Detecting organic movements or changes, e.g. tumours, cysts, swellings, for diagnosis of blood vessels
    • A61B 8/4411 Constructional features of the diagnostic device: device being modular
    • A61B 8/4427 Constructional features of the diagnostic device: device being portable or laptop-like
    • A61B 8/5215 Devices using data or image processing specially adapted for ultrasonic diagnosis, involving processing of medical diagnostic data
    • G06N 3/08 Computing arrangements based on neural networks: learning methods
    • G06T 5/70
    • G06T 7/0012 Image analysis: biomedical image inspection
    • G06T 7/11 Segmentation; edge detection: region-based segmentation
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06V 10/34 Image preprocessing: smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/764 Recognition or understanding using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82 Recognition or understanding using neural networks
    • G06T 2207/10132 Image acquisition modality: ultrasound image
    • G06T 2207/20081 Special algorithmic details: training; learning
    • G06T 2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30101 Subject of image: blood vessel; artery; vein; vascular

Abstract

The invention provides a portable three-dimensional carotid artery ultrasonic automatic diagnosis system comprising a data acquisition module, an automatic segmentation network, an automatic diagnosis network, and a three-dimensional reconstruction and visualization module. The invention further provides a portable three-dimensional carotid artery ultrasonic automatic diagnosis method based on the system. In data acquisition, the invention reduces reconstruction artifacts by prescribing a standard scanning workflow, and makes the three-dimensional reconstruction smoother by, among other methods, filtering the position information. In data post-processing, the invention uses deep learning to realize automated, intelligent and visualized segmentation and analysis of three-dimensional carotid ultrasound images.

Description

Portable three-dimensional carotid artery ultrasonic automatic diagnosis system and method
Technical Field
The invention relates to medical imaging and ultrasound carotid artery vascular imaging techniques.
Background
In recent years, stroke caused by carotid atherosclerosis has become one of the leading causes of death. The pathological manifestations of carotid atherosclerosis are an increase in intima-media thickness and the appearance of plaque in the carotid arteries. As plaque progresses, on one hand the lumen of the carotid vessel narrows or becomes completely blocked, obstructing blood flow and affecting the oxygen supply to the brain. On the other hand, the plaque may detach or ulcerate; the local damage puts the blood into a hypercoagulable state, causing aggregation of red blood cells and platelets, forming thrombi and causing stroke. According to the 2018 Report on Cardiovascular Diseases in China, about 290 million people in China have cardiovascular disease, 13 million of whom are stroke patients, and the incidence of carotid atherosclerosis in people over 40 years old is about 36.2%.
Most clinical ultrasound is two-dimensional. Although it has the advantage of speed, its image quality is poor, the dimensional information it provides is limited, and it is easily influenced by operator experience. Moreover, since examination for carotid atherosclerosis is mainly performed by experienced sonographers in hospitals, patients in less developed or remote areas find it difficult to obtain timely diagnosis and treatment. Meanwhile, China's population base is large and aging is severe, so the population requiring carotid ultrasound examination is large; long appointment times and complicated diagnostic procedures mean that the large number of carotid patients places a huge burden on the medical system.
Three-dimensional ultrasound imaging can provide richer dimensional information, reducing dependence on operator experience while quantitatively giving the volume of a region of interest (ROI). In diagnosing carotid atherosclerosis, three-dimensional ultrasound can directly provide information such as the volume, morphological characteristics and echo intensity of plaque, helping the sonographer make a more accurate diagnosis. Meanwhile, portable equipment has wider application scenarios, making screening for carotid atherosclerosis in large communities and remote areas possible.
Existing three-dimensional carotid ultrasound usually adopts mechanical-arm scanning imaging. Although this has the advantages of stable imaging and a simple reconstruction algorithm, the limited degrees of freedom of the mechanical arm lead to poor two-dimensional imaging quality. Portable unconstrained three-dimensional ultrasound uses a magnetic three-dimensional positioning method, so the scanning pattern can be freer. Related methods for portable unconstrained three-dimensional ultrasonic imaging mainly comprise an unconstrained scanning method and a voxel-based three-dimensional real-time bone imaging method: for example, invention patent application CN201911132940.5 discloses an unconstrained scanning method and a voxel-based three-dimensional real-time bone imaging method, and invention patent application CN202010165914.9 discloses a handheld unconstrained-scanning wireless three-dimensional ultrasonic real-time voxel imaging system. However, these methods and systems have the following problems in three-dimensional carotid ultrasound reconstruction and automatic identification:
1) Because three-dimensional carotid ultrasound imaging is more demanding, the probe needs to be replaced with a linear-array probe of higher imaging resolution.
2) If carotid imaging is performed with the three-dimensional imaging steps of the related methods, involuntary breathing of the subject, pulsation of the blood vessels and other factors during scanning make the carotid ultrasound images at the same position but different times inconsistent, so that artifacts appear in the reconstruction.
3) Since carotid three-dimensional ultrasound places higher resolution requirements on the reconstruction, involuntary shaking of the scanner's hands during scanning affects the quality of the reconstructed image.
4) The related systems can only provide the result of three-dimensional ultrasonic imaging; they have no specific algorithm or flow for automatic carotid segmentation and diagnosis, and cannot realize automatic segmentation, identification and visualization of carotid plaques.
Disclosure of Invention
The purpose of the invention is to provide a three-dimensional ultrasonic automatic carotid atherosclerosis examination technique combining a portable handheld three-dimensional ultrasound system with deep learning.
In order to achieve the above object, one technical solution of the present invention is to provide a portable three-dimensional carotid artery ultrasound automatic diagnosis system, which is characterized by comprising a data acquisition module, an automatic segmentation network, an automatic diagnosis network, and a three-dimensional reconstruction and visualization module, wherein:
a data acquisition module for acquiring a series of carotid artery two-dimensional B-mode images of the subject and corresponding position information thereof, the series of carotid artery two-dimensional B-mode images being further defined as a two-dimensional ultrasound B-mode image sequence;
the automatic segmentation network is used for inferring the LIB region and the MAB region in each two-dimensional ultrasound B-mode image of the two-dimensional ultrasound B-mode image sequence and generating masks corresponding to the LIB and MAB regions, wherein the MAB region is the region enclosed by the media-adventitia boundary of the vessel and the LIB region is the region enclosed by the lumen-intima boundary;
the LIB region and the MAB region obtained by automatic segmentation network inference and the masks corresponding to them serve as the input of the automatic diagnosis network, the LIB and MAB regions as the image input and the masks as the label input; the automatic diagnosis network consists of two symmetrical feature extraction networks and a feature fusion network, the image input and the label input are fed into the two feature extraction networks respectively to obtain image features and label features of the same dimension, the image features and label features are concatenated along the channel dimension as the input of the feature fusion network, and the feature fusion network outputs the classification result, i.e., whether plaque exists;
the three-dimensional reconstruction and visualization module first performs filtering smoothing or regularization on the three-dimensional position information and then performs three-dimensional reconstruction, through the following steps:
carrying out low-pass filtering on the acquired three-dimensional position information to filter out its high-frequency components, i.e. the operator's hand tremor, obtaining smoothed three-dimensional position information;
for three-dimensional position information containing trajectory backtracking, adopting key-frame analysis to rearrange the backtracked position information and the corresponding two-dimensional images according to the positions before and after, so as to avoid three-dimensional reconstruction artifacts caused by differing two-dimensional images at the same or similar positions;
after the two-dimensional MAB and LIB regions are obtained, combining the smoothed three-dimensional position information and using a voxel-based backward-mapping method to obtain a true three-dimensional model of the carotid artery;
and performing volume rendering on the reconstructed three-dimensional model to obtain a three-dimensional visualization model of the carotid artery.
Preferably, the automatic segmentation network adopts a U-Net structure, and a batch normalization layer follows each convolution layer of the U-Net structure.
Preferably, the automatic segmentation network comprises convolution unit one, pooling layer one, convolution unit two, pooling layer two, convolution unit three, pooling layer three, convolution unit four, pooling layer four, convolution unit five, upsampling layer one, convolution unit six, upsampling layer two, convolution unit seven, upsampling layer three, convolution unit eight, upsampling layer four, convolution unit nine and a fully connected layer. The outputs of convolution units one, two, three and four are concatenated along the channel dimension with the outputs of upsampling layers four, three, two and one, respectively, and then input to convolution units nine, eight, seven and six, respectively. The decoder outputs are progressively upsampled by the upsampling layers, enlarging the image size, wherein:
the image is input to convolution unit one; the output of convolution unit one is, on the one hand, passed through pooling layer one to convolution unit two, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer four and input to convolution unit nine;
the output of convolution unit two is, on the one hand, passed through pooling layer two to convolution unit three, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer three and input to convolution unit eight;
the output of convolution unit three is, on the one hand, passed through pooling layer three to convolution unit four, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer two and input to convolution unit seven;
the output of convolution unit four is, on the one hand, passed through pooling layer four to convolution unit five, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer one and input to convolution unit six;
the output of convolution unit five is input to upsampling layer one, the output of convolution unit six to upsampling layer two, the output of convolution unit seven to upsampling layer three, and the output of convolution unit eight to upsampling layer four;
and the output of convolution unit nine passes through the fully connected layer to output the segmentation map.
Preferably, the training of the automatic segmentation network comprises the following steps:
after a two-dimensional ultrasonic B-mode image sequence and corresponding position information are obtained, manually marking an MAB region and an LIB region of each two-dimensional ultrasonic B-mode image;
after marking is finished, the two-dimensional ultrasound B-mode images are preprocessed; the preprocessing comprises image size normalization, gray stretching and data enhancement, as follows:
image size normalization processing: resetting the two-dimensional ultrasonic B-mode image through nearest neighbor interpolation;
gray stretching: changing the image intensity to between 0 and 1;
data enhancement: and performing online data enhancement.
All the preprocessed two-dimensional ultrasound B-mode images form a two-dimensional ultrasound image sequence, which, together with the label information of the manually marked MAB and LIB regions, is input into the automatic segmentation network for training; the loss function of the automatic segmentation network consists of a Dice loss function and a cross-entropy loss function.
Preferably, the LIB region and the MAB region obtained by the automatic segmentation network inference and the corresponding masks are respectively clipped and adjusted to the same size, and then are used as image input and label input to be respectively input into the two feature extraction networks.
Preferably, the training of the automatic diagnostic network comprises the steps of:
obtaining a two-dimensional ultrasonic B-mode image through a data acquisition module;
after the two-dimensional ultrasound B-mode images are preprocessed, all the preprocessed two-dimensional ultrasound B-mode images form a two-dimensional ultrasound image sequence, wherein the preprocessing of the two-dimensional ultrasound B-mode images specifically comprises the following steps:
cutting the MAB region and the LIB region in each two-dimensional ultrasound B-mode image, setting the image intensity in the LIB region as 0, and setting the image intensity of the MAB region as the image intensity of the original blood vessel wall region; cutting masks of the MAB area and the LIB area, and adjusting the sizes of the masks to be the same as the sizes of images of the MAB area and the LIB area to obtain label input;
performing training data on-line enhancement on image input and label input of the automatic diagnosis network;
and manually marking whether each two-dimensional ultrasonic B-mode image has a plaque or not, generating a label corresponding to the two-dimensional ultrasonic image sequence, inputting the two-dimensional ultrasonic image sequence and the corresponding label into an automatic diagnosis network for training, wherein a cross entropy loss function is adopted as a loss function of the automatic diagnosis network.
Another technical solution of the present invention is to provide a portable three-dimensional carotid artery ultrasonic automatic diagnosis method based on the foregoing system, which is characterized by comprising the following steps:
Step 1: the subject lies supine, with the head rotated toward the scanned side to expose the skin on one side of the neck. A researcher then holds the portable ultrasonic probe and sweeps it at a constant speed along the carotid artery from the distal common carotid artery to the bifurcation, thereby acquiring a series of two-dimensional B-mode images of the carotid artery and their corresponding position information; the series of images is further defined as a two-dimensional ultrasound B-mode image sequence;
in order to obtain clearer three-dimensional reconstruction, the following scanning protocol is followed when scanning the carotid artery:
the scanning speed is kept uniform, with the time for one carotid scan controlled to between 5 and 10 seconds;
the scanning direction is kept unchanged, avoiding backtracking of the scanning trajectory;
the scanning trajectory is kept smoothly varying, avoiding large jitter during scanning;
Step 2: inputting the obtained two-dimensional ultrasound B-mode image sequence into the trained automatic segmentation network, which infers the LIB and MAB regions in each two-dimensional ultrasound B-mode image of the sequence and generates masks corresponding to the LIB and MAB regions;
Step 3: based on the LIB and MAB regions of the two-dimensional ultrasound B-mode image sequence obtained in step 2, generating a three-dimensional visualization model of the carotid artery using the three-dimensional reconstruction and visualization module;
Step 4: each cross-sectional image of the three-dimensional visualization model of the carotid artery is a slice image; the slice images, together with the masks obtained in step 2, are input into the trained automatic diagnosis network, and the diagnosis result of each slice is obtained by inference. If the automatic diagnosis network judges that plaque exists in N consecutive slices, the subject is judged to have carotid atherosclerosis; otherwise, the subject is judged not to have carotid atherosclerosis, where N is an empirical threshold.
In data acquisition, the invention reduces reconstruction artifacts by prescribing a standard scanning workflow, and makes the three-dimensional reconstruction smoother by, among other methods, filtering the position information. In data post-processing, the invention uses deep learning to realize automated, intelligent and visualized segmentation and analysis of three-dimensional carotid ultrasound images.
Drawings
FIG. 1 is a flow chart of the overall system usage;
FIG. 2 is a diagram of an auto-partition network architecture;
FIG. 3 is a diagram of an automatic diagnostic network architecture;
FIG. 4 compares the automatic segmentation result with the manual labeling result, where the lighter lines are the automatic algorithm's results and the darker lines the manual labels; the automatic and manual segmentations are substantially consistent;
FIG. 5 compares the position information before and after filtering, where the more jittery curve is the position information before filtering and the smoother curve the position information after filtering;
FIGS. 6A and 6B compare longitudinal sections of the three-dimensional vessel reconstruction before and after filtering: FIG. 6A is the reconstruction after filtering and FIG. 6B the direct reconstruction; the reconstructed image after filtering is visibly smoother;
FIGS. 7A and 7B compare three-dimensional visualizations of the automatic segmentation result (FIG. 7A) and the manual labeling result (FIG. 7B).
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
The embodiment discloses a portable three-dimensional carotid artery ultrasonic automatic diagnosis system, which comprises:
one) data acquisition module
The data acquisition module scans the carotid artery with the portable ultrasonic probe, acquiring a series of two-dimensional B-mode images covering about 4 cm of the subject's carotid artery from the common carotid artery to the bifurcation, together with the corresponding position information of the images; the series of images is further defined as a two-dimensional ultrasound B-mode image sequence.
When scanning the carotid artery, the subject is in the supine position with the head rotated toward the scanned side to expose the skin on one side of the neck. The investigator then holds the portable ultrasound probe and sweeps it at a uniform velocity in a straight line along the carotid artery from the distal common carotid artery to the bifurcation, thereby acquiring a series of two-dimensional B-mode images covering about 4 cm from the common carotid artery to the bifurcation and their corresponding location information.
In order to obtain clearer three-dimensional reconstruction, the following scanning protocol is adopted when scanning the carotid artery:
The scanning speed is kept uniform, with the time for one carotid scan controlled to between 5 and 10 seconds.
The scanning direction is kept unchanged, avoiding backtracking of the scanning trajectory.
The scanning trajectory is kept smoothly varying, avoiding large jitter during scanning.
Two) automatic segmentation network
The automatic segmentation network adopts an improved U-Net structure. Compared with the original U-Net, the automatic segmentation network of the invention connects a batch normalization (BatchNorm) layer behind each convolution layer to accelerate network convergence and improve segmentation accuracy.

Specifically, as shown in fig. 2, the automatic segmentation network consists of 9 convolution units, 4 max-pooling layers, 4 upsampling layers and 1 fully connected layer (a 1 × 1 convolution layer), connected in the order: convolution unit one, pooling layer one, convolution unit two, pooling layer two, convolution unit three, pooling layer three, convolution unit four, pooling layer four, convolution unit five, upsampling layer one, convolution unit six, upsampling layer two, convolution unit seven, upsampling layer three, convolution unit eight, upsampling layer four, convolution unit nine, fully connected layer.

The output dimensions of convolution units one, two, three and four are 224 × 224 × 64, 112 × 112 × 128, 56 × 56 × 256 and 28 × 28 × 512, respectively. Their outputs are concatenated along the channel dimension with the outputs of upsampling layers four, three, two and one, respectively, and then input to convolution units nine, eight, seven and six, respectively. After processing by convolution units six, seven, eight and nine, the output dimensions are 28 × 28 × 512, 56 × 56 × 256, 112 × 112 × 128 and 224 × 224 × 64, respectively. The decoder outputs are progressively upsampled by the upsampling layers, enlarging the image size.

The image is input to convolution unit one. The output of convolution unit one is, on the one hand, passed through pooling layer one to convolution unit two, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer four and input to convolution unit nine. The output of convolution unit two is, on the one hand, passed through pooling layer two to convolution unit three, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer three and input to convolution unit eight. The output of convolution unit three is, on the one hand, passed through pooling layer three to convolution unit four, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer two and input to convolution unit seven. The output of convolution unit four is, on the one hand, passed through pooling layer four to convolution unit five, and, on the other hand, concatenated along the channel dimension with the output of upsampling layer one and input to convolution unit six.

The output of convolution unit five is input to upsampling layer one, the output of convolution unit six to upsampling layer two, the output of convolution unit seven to upsampling layer three, and the output of convolution unit eight to upsampling layer four. The output of convolution unit nine passes through the fully connected layer and outputs the segmentation map.
In the present embodiment, each convolution unit is composed of two basic convolution layers. Each basic convolution layer comprises a convolution layer followed by a batch normalization layer and a Rectified Linear Unit (ReLU) activation.
In the automatic segmentation network: the kernel size of all convolution layers is 3 × 3 with stride 1 and padding 1; the kernel size of all pooling layers is 2 × 2 with stride 2.
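As a concrete illustration, the following is a minimal PyTorch sketch of one such basic convolution unit (two 3 × 3 convolutions, each followed by batch normalization and ReLU) together with a 2 × 2 max pooling step; the class name ConvUnit is an illustrative assumption, not taken from the patent:

    import torch
    import torch.nn as nn

    class ConvUnit(nn.Module):
        """Two basic convolution layers: each a 3x3 convolution (stride 1,
        padding 1) followed by batch normalization and a ReLU."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.block(x)

    # Encoder-side usage: a 224 x 224 grayscale image through convolution unit
    # one (64 channels) and pooling layer one (2 x 2, stride 2).
    unit_one = ConvUnit(1, 64)
    pool_one = nn.MaxPool2d(kernel_size=2, stride=2)
    x = torch.randn(1, 1, 224, 224)
    y = pool_one(unit_one(x))   # shape: (1, 64, 112, 112)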
The training of the automatic segmentation network comprises the following steps:
After the two-dimensional ultrasound B-mode images of the carotid vessels and the corresponding position information are obtained by the data acquisition module, labeling software is used to manually mark, in each two-dimensional ultrasound B-mode image, the region enclosed by the media-adventitia boundary of the vessel (hereinafter the MAB region) and the region enclosed by the lumen-intima boundary (hereinafter the LIB region).
After the marking is completed, the two-dimensional ultrasound B-mode image is preprocessed. Preprocessing includes image size normalization, gray scale stretching, and data enhancement.
Image size normalization: the two-dimensional ultrasound B-mode image is resized by nearest-neighbor interpolation.
Gray stretching: the image intensity is changed to between 0 and 1 using the following formula:
Figure BDA0003890536310000081
in the formula, I represents image intensity.
Data enhancement: random image scaling, flipping, rotation, gamma gray-level transformation and the like are adopted for online data enhancement.
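A minimal NumPy/OpenCV sketch of this preprocessing is given below; the 224 × 224 target size matches the network dimensions above, while the function names, augmentation ranges and the use of OpenCV are illustrative assumptions (random scaling is omitted for brevity):

    import numpy as np
    import cv2

    def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
        """Nearest-neighbor resize, then min-max gray stretching to [0, 1]."""
        image = cv2.resize(image, (size, size), interpolation=cv2.INTER_NEAREST)
        image = image.astype(np.float32)
        return (image - image.min()) / (image.max() - image.min() + 1e-8)

    def augment(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
        """Online augmentation: random flip, 90-degree rotation and gamma
        transform; geometric transforms are applied to image and mask alike."""
        if rng.random() < 0.5:
            image, mask = image[:, ::-1], mask[:, ::-1]      # horizontal flip
        k = int(rng.integers(0, 4))
        image, mask = np.rot90(image, k), np.rot90(mask, k)  # random rotation
        gamma = rng.uniform(0.7, 1.5)                        # gamma transform
        return np.clip(image, 0.0, 1.0) ** gamma, mask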
All the preprocessed two-dimensional ultrasound B-mode images form a two-dimensional ultrasound image sequence, which is input, together with the label information of the manually marked MAB and LIB regions, into the automatic segmentation network for training. The loss function of the automatic segmentation network consists of a Dice loss function and a cross-entropy loss function. After training is completed, the model parameters of the automatic segmentation network are saved for the subsequent steps.
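A minimal PyTorch sketch of such a combined loss follows; the equal weighting of the two terms is an assumption, as the patent only names the components:

    import torch
    import torch.nn.functional as F

    def seg_loss(logits: torch.Tensor, target: torch.Tensor,
                 eps: float = 1e-6) -> torch.Tensor:
        """Cross-entropy plus soft Dice loss.
        logits: (B, C, H, W) raw network outputs; target: (B, H, W) class ids."""
        ce = F.cross_entropy(logits, target)
        probs = F.softmax(logits, dim=1)
        onehot = F.one_hot(target, num_classes=logits.shape[1])
        onehot = onehot.permute(0, 3, 1, 2).float()
        inter = (probs * onehot).sum(dim=(2, 3))
        denom = probs.sum(dim=(2, 3)) + onehot.sum(dim=(2, 3))
        dice = 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()
        return ce + dice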
Three) automatic diagnostic network
The automatic diagnosis network consists of two symmetrical feature extraction networks and a feature fusion network, as shown in fig. 3. Specifically, each feature extraction network is formed by sequentially connecting basic convolution unit one, basic convolution unit two, max pooling layer one, basic convolution unit three, basic convolution unit four, max pooling layer two, basic convolution unit five, basic convolution unit six and max pooling layer three. The input of the automatic diagnosis network comprises an image input and a label input: the image input is the segmentation map output by the automatic segmentation network, and the label input is obtained by cropping the MAB-region and LIB-region labels output by the automatic segmentation network and resizing them to 128 × 128. The image input and the label input are fed into the two feature extraction networks respectively to obtain image features and label features, each of dimension 16 × 16 × 96. The image features and label features are concatenated along the channel dimension as the input of the feature fusion network.
The feature fusion network is formed by sequentially connecting basic convolution unit thirteen, basic convolution unit fourteen, max pooling layer seven, a global average pooling layer and a fully connected layer. The output of max pooling layer seven has 96 channels; after the global average pooling layer the output dimension becomes 96 × 1, and after the fully connected layer the classification result is output, i.e., whether plaque exists.
A basic convolution unit in the automatic diagnosis network consists of two basic convolution layers, each of which comprises a convolution layer followed by a batch normalization (BatchNorm) layer and a Rectified Linear Unit (ReLU).
In the automatic diagnosis network: the kernel size of all convolution layers is 3 × 3 with stride 1 and padding 1; all max pooling layers have a kernel size of 2 × 2 with stride 2.
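A minimal PyTorch sketch of this two-branch structure follows; the layer counts and the stated 96-channel, 16 × 16 feature size match the description above, while the intermediate channel widths and the single-channel encoding of the label input are assumptions:

    import torch
    import torch.nn as nn

    def conv_unit(in_ch: int, out_ch: int) -> nn.Sequential:
        # Basic convolution unit: two (3x3 conv -> BatchNorm -> ReLU) layers.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, 1, 1), nn.BatchNorm2d(out_ch), nn.ReLU(True),
            nn.Conv2d(out_ch, out_ch, 3, 1, 1), nn.BatchNorm2d(out_ch), nn.ReLU(True),
        )

    class FeatureExtractor(nn.Module):
        # Six basic convolution units with three 2x2 max poolings:
        # a 128 x 128 input becomes a 16 x 16 x 96 feature map.
        def __init__(self, in_ch: int):
            super().__init__()
            self.net = nn.Sequential(
                conv_unit(in_ch, 24), conv_unit(24, 24), nn.MaxPool2d(2, 2),
                conv_unit(24, 48), conv_unit(48, 48), nn.MaxPool2d(2, 2),
                conv_unit(48, 96), conv_unit(96, 96), nn.MaxPool2d(2, 2),
            )
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    class DiagnosisNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.image_branch = FeatureExtractor(1)   # image input branch
            self.label_branch = FeatureExtractor(1)   # label (mask) input branch
            self.fusion = nn.Sequential(              # units thirteen, fourteen, pooling seven
                conv_unit(192, 96), conv_unit(96, 96), nn.MaxPool2d(2, 2),
            )
            self.gap = nn.AdaptiveAvgPool2d(1)        # global average pooling -> 96 x 1
            self.fc = nn.Linear(96, 2)                # plaque / no plaque

        def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
            feats = torch.cat([self.image_branch(image),
                               self.label_branch(mask)], dim=1)  # channel concat
            return self.fc(self.gap(self.fusion(feats)).flatten(1))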
The training of the automatic diagnostic network comprises the following steps:
obtaining a two-dimensional ultrasonic B-mode image through a data acquisition module;
after the two-dimensional ultrasonic B-mode images are preprocessed, all the preprocessed two-dimensional ultrasonic B-mode images form a two-dimensional ultrasonic image sequence;
whether plaque exists in the ultrasonic image is irrelevant to image information outside the boundary of the blood vessel tunica adventitia and inside the blood vessel tunica intima, and the network training can be accelerated by removing useless image areas, so that the accuracy is improved. Therefore, in this embodiment, the preprocessing the two-dimensional ultrasound B-mode image specifically includes the following steps:
cutting out the MAB region and the LIB region in each two-dimensional ultrasound B-mode image, setting the image intensity inside the LIB region to 0 and keeping the image intensity of the MAB region as that of the original vessel-wall region, then resizing to 128 × 128 to obtain the image input of the automatic diagnosis network; cutting out the labels of the MAB and LIB regions corresponding to the current two-dimensional ultrasound B-mode image and resizing them to 128 × 128 to obtain the label input (see the sketch after these steps);
performing online training-data enhancement, such as random image flipping and rotation transforms, on the image input and label input of the automatic diagnosis network;
and manually marking whether plaque exists in each two-dimensional ultrasonic B-mode image, generating a label corresponding to the two-dimensional ultrasonic image sequence, and inputting the two-dimensional ultrasonic image sequence and the corresponding label into an automatic diagnosis network for training. The loss function of the automatic diagnostic network adopts a cross entropy loss function. After the training is finished, the model parameters of the automatic diagnosis network are saved so as to carry out the subsequent steps.
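A minimal NumPy/OpenCV sketch of the ROI preprocessing in the first step above; the single-channel encoding of the label map (0 background, 1 vessel wall, 2 lumen) and the function name are illustrative assumptions:

    import numpy as np
    import cv2

    def build_inputs(image: np.ndarray, mab: np.ndarray, lib: np.ndarray,
                     size: int = 128):
        """image: 2-D grayscale; mab, lib: boolean masks of the MAB and LIB
        regions. Returns the 128 x 128 image input and label input."""
        wall = image * (mab & ~lib)              # lumen zeroed, wall intensity kept
        ys, xs = np.nonzero(mab)                 # bounding box of the MAB region
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        resize = lambda a: cv2.resize(a[y0:y1, x0:x1].astype(np.float32),
                                      (size, size),
                                      interpolation=cv2.INTER_NEAREST)
        img_in = resize(wall)                    # image input
        lbl_in = resize(mab.astype(np.float32) + lib.astype(np.float32))
        return img_in, lbl_in                    # lbl_in: 0 / 1 (wall) / 2 (lumen)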
Four) three-dimensional reconstruction and visualization module
Three-dimensional reconstruction of the carotid region is performed based on the MAB and LIB regions of the two-dimensional ultrasound B-mode image sequence. Since the operator may have involuntary hand tremor during scanning, directly using the three-dimensional position information for reconstruction may cause artifacts in the reconstructed image. Therefore, the three-dimensional reconstruction and visualization module first performs filtering smoothing or regularization on the three-dimensional position information and then performs the three-dimensional reconstruction. Specifically, the module realizes three-dimensional reconstruction through the following steps:
carrying out low-pass filtering on the acquired three-dimensional position information to filter out its high-frequency components, i.e. the operator's hand tremor, obtaining smoothed three-dimensional position information;
for three-dimensional position information containing trajectory backtracking, key-frame analysis is adopted, and the backtracked position information and the corresponding two-dimensional images are rearranged according to the positions before and after, so as to avoid three-dimensional reconstruction artifacts caused by differing two-dimensional images at the same or similar positions. Specifically, the Z-axis values of the scanned three-dimensional position points can first be sorted and the two-dimensional images of the corresponding position points sorted accordingly, giving a clearer three-dimensional reconstructed vessel image.
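A minimal NumPy/SciPy sketch of these two steps follows; the sampling rate, filter order and cutoff frequency are illustrative assumptions (the patent does not specify them):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def smooth_positions(pos: np.ndarray, fs: float = 30.0,
                         cutoff: float = 2.0) -> np.ndarray:
        """Zero-phase low-pass filtering of an (N, 3) position trace to remove
        high-frequency hand tremor. fs: sampling rate (Hz); cutoff (Hz)."""
        b, a = butter(4, cutoff / (fs / 2.0))    # 4th-order Butterworth low-pass
        return filtfilt(b, a, pos, axis=0)

    def reorder_by_z(pos: np.ndarray, frames: np.ndarray):
        """Key-frame rearrangement for backtracked trajectories: sort the
        position points by Z-axis value and reorder the 2-D frames identically."""
        order = np.argsort(pos[:, 2])
        return pos[order], frames[order]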
The three-dimensional trajectory contrast before and after filtering is shown in fig. 5 by light and dark curves, and the three-dimensional reconstruction contrast of the blood vessel before and after filtering is shown in fig. 6A and 6B.
After the two-dimensional MAB and LIB regions are obtained, the smoothed three-dimensional position information is combined and a voxel-based backward-mapping method is used for reconstruction, obtaining a true three-dimensional model of the carotid artery.
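The patent does not detail the backward mapping; the following naive, unoptimized sketch illustrates only the general idea of voxel-based backward mapping (every voxel samples the nearest 2-D frame through that frame's inverse pose), under the assumption that each frame carries a 4 × 4 pixel-to-world homogeneous transform:

    import numpy as np

    def backward_map(frames: np.ndarray, poses: np.ndarray, origin: np.ndarray,
                     spacing: float, shape: tuple) -> np.ndarray:
        """frames: (N, H, W) images; poses: (N, 4, 4) pixel-to-world transforms;
        origin: world coordinate of voxel (0, 0, 0); shape: (D, H, W) volume."""
        inv_poses = np.linalg.inv(poses)        # world -> frame pixel coordinates
        centers = poses[:, :3, 3]               # world position of each frame origin
        vol = np.zeros(shape, np.float32)
        _, fh, fw = frames.shape
        for idx in np.ndindex(*shape):
            world = origin + np.array(idx[::-1], np.float32) * spacing  # (x, y, z)
            n = int(np.argmin(np.linalg.norm(centers - world, axis=1))) # nearest frame
            u, v, d, _ = inv_poses[n] @ np.append(world, 1.0)
            if abs(d) <= spacing and 0 <= v < fh and 0 <= u < fw:
                vol[idx] = frames[n, int(v), int(u)]   # sample nearest pixel
        return vol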
And performing volume rendering on the reconstructed three-dimensional model to obtain a three-dimensional visual model of the carotid artery blood vessel.
The portable three-dimensional carotid artery ultrasonic automatic diagnosis method based on the system is characterized by comprising the following steps:
Step 1: the subject lies supine, with the head rotated toward the scanned side to expose the skin on one side of the neck. A researcher then holds the portable ultrasonic probe and sweeps it linearly at a constant speed along the carotid artery from the distal common carotid artery to the bifurcation, thereby acquiring a series of two-dimensional B-mode images covering about 4 cm from the common carotid artery to the bifurcation, together with their corresponding position information; the series of images is further defined as a two-dimensional ultrasound B-mode image sequence.
In order to obtain clearer three-dimensional reconstruction, the following scanning protocol is adopted when scanning the carotid artery:
The scanning speed is kept uniform, with the time for one carotid scan controlled to between 5 and 10 seconds.
The scanning direction is kept unchanged, avoiding backtracking of the scanning trajectory.
The scanning trajectory is kept smoothly varying, avoiding large jitter during scanning.
Step 2: the acquired two-dimensional ultrasound B-mode image sequence is input into the trained automatic segmentation network, which infers the LIB and MAB regions in each two-dimensional ultrasound B-mode image of the sequence and generates the masks corresponding to the LIB and MAB regions.
Step 3: based on the LIB and MAB regions of the two-dimensional ultrasound B-mode image sequence obtained in step 2, a three-dimensional visualization model of the carotid artery is generated using the three-dimensional reconstruction and visualization module.
Step 4: each cross-sectional image of the three-dimensional visualization model of the carotid artery is cut out, resized, and input into the trained automatic diagnosis network, and the diagnosis result of each slice, i.e., whether plaque exists, is obtained by inference. In this embodiment, if 5 consecutive slices are judged by the automatic diagnosis network to contain plaque, the subject is judged to have carotid atherosclerosis; otherwise, the subject is judged not to have carotid atherosclerosis.
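A minimal sketch of this patient-level decision rule (a run of N = 5 consecutive plaque-positive slices):

    def has_atherosclerosis(slice_preds, n: int = 5) -> bool:
        """slice_preds: per-slice plaque predictions (booleans) from the
        diagnosis network, ordered along the vessel. True if any n consecutive
        slices are plaque-positive."""
        run = 0
        for positive in slice_preds:
            run = run + 1 if positive else 0
            if run >= n:
                return True
        return False

    # Example: five consecutive positive slices -> positive diagnosis.
    print(has_atherosclerosis([0, 1, 1, 1, 1, 1, 0]))  # True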
By applying this portable handheld three-dimensional ultrasonic automatic diagnosis technique for carotid atherosclerosis, rapid three-dimensional imaging of the carotid vascular region by inexperienced operators can be realized and, combined with artificial intelligence, auxiliary diagnosis and three-dimensional visualization of carotid atherosclerosis can be realized. Here the system was applied to 15 clinical carotid artery data sets to verify its effectiveness.
Step one: data acquisition
Ultrasound image data were obtained with a linear-array ultrasound probe (Clarius L738-K, Canada) at a frequency of 8 MHz, and the position information corresponding to each two-dimensional image was obtained with an electromagnetic positioning system (Polhemus G4 unit, U.S.A.). The final two-dimensional images are 640 × 480 with 256 gray levels.
Step two: automatic segmentation network training
After 40 carotid ultrasound scans were obtained, the resulting two-dimensional ultrasound image sequences were labeled using labeling software. 25 of the scans were randomly selected as the training set and 15 as the validation set. The carotid two-dimensional images in the training set and their labels were used as input data to train the deep-learning model of the automatic segmentation network.
Step three: automated diagnostic network training
Whether plaque exists was marked in the obtained two-dimensional ultrasound image sequences. The images corresponding to the 25 scans selected in step two were preprocessed, and the preprocessed ultrasound images and their corresponding labels were input to train the deep-learning model of the automatic diagnosis network.
Step four: image automatic segmentation reasoning
The ultrasound image sequences of the 15 randomly selected validation sets were input into the trained automatic segmentation deep-learning model to obtain the corresponding output labels. The automatic segmentation results were evaluated using expressions (2) and (3).
DSC = 2|P ∩ L| / (|P| + |L|)   (2)

HD(A, B) = max(hd(A, B), hd(B, A))   (3)

where

hd(A, B) = max_{a∈A} min_{b∈B} ||a − b||   (4)

hd(B, A) = max_{b∈B} min_{a∈A} ||b − a||   (5)
Here P and L are the prediction of the automatic segmentation network and the manual marking, respectively, and DSC is a performance index for evaluating a segmentation algorithm. A and B denote the point sets of the manually labeled contour and the segmentation-network contour, HD(A, B) is an evaluation index measuring the distance between point sets A and B in space, hd(A, B) and hd(B, A) are the one-way Hausdorff distances from set A to set B and from set B to set A, respectively, and ||a − b|| denotes the distance norm between points a and b.
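A minimal NumPy sketch of these two metrics (brute force, adequate for small contours):

    import numpy as np

    def dsc(pred: np.ndarray, label: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks, expression (2)."""
        inter = np.logical_and(pred, label).sum()
        return 2.0 * inter / (pred.sum() + label.sum())

    def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
        """Symmetric Hausdorff distance HD(A, B) between two (N, 2) contour
        point sets, expressions (3)-(5)."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise
        return max(d.min(axis=1).max(),   # hd(A, B)
                   d.min(axis=0).max())   # hd(B, A)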
The results of the automatic segmentation are shown in fig. 4. The numerical results of the automatic segmentation are shown in table 1 below.
Table 1 (provided as an image in the original publication): numerical comparison of the automatic segmentation results with the manually labeled results.
Step five: three-dimensional reconstruction and visualization
The MAB and LIB labels obtained by automatic segmentation in step four are three-dimensionally reconstructed to obtain the three-dimensional structure of the vessel, which is visualized using volume rendering. Figs. 7A and 7B show the visualizations of the automatically segmented three-dimensional vessel structure and of the manually labeled three-dimensional vessel structure, respectively.
Step six: automated diagnostic network reasoning
After the automatic image segmentation inference of step four is completed, MAB and LIB labels of the vessels are predicted for the ultrasound image sequences of the 15 validation sets. The cross-sectional slices of the preprocessed three-dimensional vessel images are input into the automatic diagnosis deep-learning model to obtain the automatic diagnosis prediction for each two-dimensional ultrasound image. The comparison of the automatic diagnosis results with the labeled results is shown in Table 2 below.
Table 2 (provided as an image in the original publication): comparison of image-level automatic diagnosis results with manually labeled diagnostic results.
The sensitivity, specificity and accuracy of the image-level predictions were 0.73, 0.97 and 0.91, respectively. For the automatic diagnosis of an individual patient, the three-dimensional vessel structure obtained in step five is first sliced into cross sections, automatic diagnosis network inference is performed on each slice, and the data is judged to contain plaque if 5 consecutive slices are predicted by the automatic diagnosis network to contain plaque. The comparison of data-level automatic diagnosis results with the labeled results is shown in Table 3 below. The sensitivity, specificity and accuracy of these predictions were 0.81, 0.75 and 0.80, respectively.
Table 3 (provided as an image in the original publication): comparison of data-level automatic diagnosis results with manually labeled diagnosis results.

Claims (7)

1. A portable three-dimensional carotid artery ultrasonic automatic diagnosis system, characterized by comprising a data acquisition module, an automatic segmentation network, an automatic diagnosis network, and a three-dimensional reconstruction and visualization module, wherein:
a data acquisition module for acquiring a series of carotid artery blood vessel two-dimensional B-mode images of the subject and corresponding position information thereof, the series of carotid artery blood vessel two-dimensional B-mode images being further defined as a two-dimensional ultrasound B-mode image sequence;
the automatic segmentation network is used for inferring the LIB region and the MAB region in each two-dimensional ultrasound B-mode image of the two-dimensional ultrasound B-mode image sequence and generating masks corresponding to the LIB and MAB regions, wherein the MAB region is the region enclosed by the media-adventitia boundary of the vessel and the LIB region is the region enclosed by the lumen-intima boundary;
the LIB region and the MAB region obtained by automatic segmentation network inference and the masks corresponding to them serve as the input of the automatic diagnosis network, the LIB and MAB regions as the image input and the masks as the label input; the automatic diagnosis network consists of two symmetrical feature extraction networks and a feature fusion network, the image input and the label input are fed into the two feature extraction networks respectively to obtain image features and label features of the same dimension, the image features and label features are concatenated along the channel dimension as the input of the feature fusion network, and the feature fusion network outputs the classification result, i.e., whether plaque exists;
the three-dimensional reconstruction and visualization module first performs filtering smoothing or regularization on the three-dimensional position information and then performs three-dimensional reconstruction, through the following steps:
carrying out low-pass filtering on the acquired three-dimensional position information to filter out its high-frequency components, i.e. the operator's hand tremor, obtaining smoothed three-dimensional position information;
for three-dimensional position information containing trajectory backtracking, adopting key-frame analysis to rearrange the backtracked position information and the corresponding two-dimensional images according to the positions before and after, so as to avoid three-dimensional reconstruction artifacts caused by differing two-dimensional images at the same or similar positions;
after the two-dimensional MAB and LIB regions are obtained, combining the smoothed three-dimensional position information and using a voxel-based backward-mapping method to obtain a true three-dimensional model of the carotid artery;
and performing volume rendering on the reconstructed three-dimensional model to obtain a three-dimensional visualization model of the carotid artery.
2. The portable three-dimensional carotid artery ultrasound automatic diagnosis system of claim 1, wherein the automatic segmentation network adopts a U-Net structure, and a batch normalization layer is connected behind each convolution layer of the U-Net structure.
3. The portable three-dimensional carotid artery ultrasound automatic diagnosis system according to claim 1, wherein the automatic segmentation network comprises a convolution unit I, a pooling layer I, a convolution unit II, a pooling layer II, a convolution unit III, a pooling layer III, a convolution unit IV, a pooling layer IV, a convolution unit V, an upsampling layer I, a convolution unit VI, an upsampling layer II, a convolution unit VII, an upsampling layer III, a convolution unit eight, an upsampling layer IV, a convolution unit nine and a full connection layer, the outputs of the convolution unit I, the convolution unit II, the convolution unit III and the convolution unit IV are directly spliced with the outputs of the upsampling layer I, the upsampling layer II, the upsampling layer III and the upsampling layer IV according to channel dimensions and then input into the convolution unit VI, the convolution unit VII, the convolution unit eight and the convolution unit nine, the outputs of the convolution unit VI, the convolution unit VII, the convolution unit VIII and the convolution unit IX are gradually upsampled through the upsampling layer to enlarge the size of the image, wherein:
an image is input to convolution unit one; the output of convolution unit one is, on one hand, passed through pooling layer one to convolution unit two, and on the other hand, concatenated along the channel dimension with the output of up-sampling layer four and input to convolution unit nine;
the output of convolution unit two is, on one hand, passed through pooling layer two to convolution unit three, and on the other hand, concatenated along the channel dimension with the output of up-sampling layer three and input to convolution unit eight;
the output of convolution unit three is, on one hand, passed through pooling layer three to convolution unit four, and on the other hand, concatenated along the channel dimension with the output of up-sampling layer two and input to convolution unit seven;
the output of convolution unit four is, on one hand, passed through pooling layer four to convolution unit five, and on the other hand, concatenated along the channel dimension with the output of up-sampling layer one and input to convolution unit six;
the output of convolution unit five is input to up-sampling layer one, the output of convolution unit six is input to up-sampling layer two, the output of convolution unit seven is input to up-sampling layer three, and the output of convolution unit eight is input to up-sampling layer four;
and the output of convolution unit nine passes through the fully connected layer to produce the segmentation map (the wiring is illustrated in the sketch after this claim).
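The encoder-decoder wiring of claim 3 can be sketched as follows, reusing conv_unit from the previous sketch. The channel widths, the bilinear up-sampling, and the 1x1-convolution head standing in for the claim's fully connected layer are assumptions of this illustration.

import torch
import torch.nn as nn

class SegNetSketch(nn.Module):
    def __init__(self, in_ch: int = 1, n_classes: int = 3, base: int = 64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8, base * 16]
        self.c1, self.c2 = conv_unit(in_ch, chs[0]), conv_unit(chs[0], chs[1])
        self.c3, self.c4 = conv_unit(chs[1], chs[2]), conv_unit(chs[2], chs[3])
        self.c5 = conv_unit(chs[3], chs[4])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.c6 = conv_unit(chs[4] + chs[3], chs[3])
        self.c7 = conv_unit(chs[3] + chs[2], chs[2])
        self.c8 = conv_unit(chs[2] + chs[1], chs[1])
        self.c9 = conv_unit(chs[1] + chs[0], chs[0])
        self.head = nn.Conv2d(chs[0], n_classes, kernel_size=1)

    def forward(self, x):
        x1 = self.c1(x)                                    # skip to c9
        x2 = self.c2(self.pool(x1))                        # skip to c8
        x3 = self.c3(self.pool(x2))                        # skip to c7
        x4 = self.c4(self.pool(x3))                        # skip to c6
        x5 = self.c5(self.pool(x4))
        d6 = self.c6(torch.cat([x4, self.up(x5)], dim=1))  # up-sampling layer one
        d7 = self.c7(torch.cat([x3, self.up(d6)], dim=1))  # up-sampling layer two
        d8 = self.c8(torch.cat([x2, self.up(d7)], dim=1))  # up-sampling layer three
        d9 = self.c9(torch.cat([x1, self.up(d8)], dim=1))  # up-sampling layer four
        return self.head(d9)                               # segmentation map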
4. The portable three-dimensional carotid artery ultrasonic automatic diagnosis system of claim 1, wherein the training of the automatic segmentation network comprises the following steps:
after the two-dimensional ultrasonic B-mode image sequence and the corresponding position information are obtained, manually annotating the MAB region and the LIB region of each two-dimensional ultrasonic B-mode image;
after annotation is completed, preprocessing each two-dimensional ultrasonic B-mode image, the preprocessing comprising image size normalization, gray-level stretching and data augmentation, wherein:
image size normalization: resizing the two-dimensional ultrasonic B-mode image by nearest-neighbor interpolation;
gray-level stretching: rescaling the image intensity to the range 0 to 1;
data augmentation: performing online data augmentation (see the sketch following this list).
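A minimal sketch of these preprocessing steps, assuming OpenCV is available; the 256x256 target size and the random horizontal flip chosen as the online augmentation are assumptions of this illustration, not values stated in the claims.

import cv2
import numpy as np

def preprocess_frame(img: np.ndarray, size: int = 256) -> np.ndarray:
    # Image size normalization via nearest-neighbor interpolation.
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_NEAREST)
    # Gray-level stretching: rescale intensities to the range [0, 1].
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def augment(img: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    # One possible online augmentation: a random horizontal flip applied
    # identically to the image and its annotation mask.
    if rng.random() < 0.5:
        img, mask = img[:, ::-1].copy(), mask[:, ::-1].copy()
    return img, mask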
forming a two-dimensional ultrasonic image sequence from all the preprocessed two-dimensional ultrasonic B-mode images, and inputting the sequence, together with the label information of the manually annotated MAB and LIB regions, into the automatic segmentation network for training, wherein the loss function of the automatic segmentation network consists of a Dice loss function and a cross-entropy loss function.
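A sketch of the composite loss named above, combining Dice loss with cross-entropy; the equal weighting of the two terms is an assumption of this illustration.

import torch
import torch.nn.functional as F

def dice_ce_loss(logits: torch.Tensor, target: torch.Tensor,
                 eps: float = 1e-6) -> torch.Tensor:
    # logits: (B, C, H, W); target: (B, H, W) integer class labels.
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    dice = (2 * inter + eps) / (union + eps)        # soft Dice per class
    return ce + (1.0 - dice.mean())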
5. The portable three-dimensional carotid artery ultrasonic automatic diagnosis system of claim 1, wherein the LIB region and the MAB region inferred by the automatic segmentation network, together with their corresponding masks, are respectively cropped and resized to the same size, and are then fed into the two feature extraction networks as the image input and the label input, respectively.
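One way to realize this crop-and-resize step is sketched below; taking the bounding box from the predicted mask and the 128x128 target size are assumptions of this illustration.

import cv2
import numpy as np

def crop_region(img: np.ndarray, mask: np.ndarray, size: int = 128):
    # Bounding box of the predicted region.
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    img_roi = cv2.resize(img[y0:y1, x0:x1], (size, size),
                         interpolation=cv2.INTER_NEAREST)
    mask_roi = cv2.resize(mask[y0:y1, x0:x1].astype(np.uint8), (size, size),
                          interpolation=cv2.INTER_NEAREST)
    return img_roi, mask_roi  # image input and label input at the same size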
6. The portable three-dimensional carotid artery ultrasonic automatic diagnosis system of claim 1, wherein the training of the automatic diagnosis network comprises the following steps:
acquiring two-dimensional ultrasonic B-mode images through the data acquisition module;
preprocessing the two-dimensional ultrasonic B-mode images and forming the preprocessed images into a two-dimensional ultrasonic image sequence, wherein the preprocessing specifically comprises:
cropping the MAB region and the LIB region in each two-dimensional ultrasonic B-mode image, setting the image intensity inside the LIB region to 0 and keeping the image intensity of the MAB region as the original vessel-wall intensity (see the sketch after this claim); cropping the masks of the MAB region and the LIB region and resizing them to the same size as the MAB and LIB region images, thereby obtaining the label input;
performing online augmentation of the training data on the image input and the label input of the automatic diagnosis network;
and manually annotating whether each two-dimensional ultrasonic B-mode image contains plaque, generating labels corresponding to the two-dimensional ultrasonic image sequence, and inputting the sequence and the corresponding labels into the automatic diagnosis network for training, wherein a cross-entropy loss function is adopted as the loss function of the automatic diagnosis network.
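A minimal sketch of the intensity rule above; the masks are assumed to be binary arrays aligned with the image, and zeroing everything outside the MAB region is an interpretation of the cropping, not a claim-specified step.

import numpy as np

def make_diagnosis_input(img: np.ndarray, mab_mask: np.ndarray,
                         lib_mask: np.ndarray) -> np.ndarray:
    out = np.where(mab_mask > 0, img, 0)  # keep wall intensities inside MAB
    out = np.where(lib_mask > 0, 0, out)  # zero the lumen inside LIB
    return out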
7. A portable three-dimensional carotid artery ultrasonic automatic diagnosis method implemented with the system of claim 1, characterized by comprising the following steps:
step 1: the subject lies supine with the head turned so as to expose the skin over the side of the neck to be scanned; the operator then holds the portable ultrasonic probe and scans along the carotid artery at a uniform speed from the distal common carotid artery to the bifurcation, thereby acquiring a series of two-dimensional carotid B-mode images and their corresponding position information, which is defined as the two-dimensional ultrasonic B-mode image sequence;
to obtain higher-quality three-dimensional reconstructions, the following scanning protocol is followed during carotid scanning:
keeping the scanning speed uniform and constant, with a single carotid scan lasting between 5 and 10 seconds;
keeping the scanning direction unchanged and avoiding backtracking of the scanning trajectory;
keeping the scanning trajectory smoothly varying and avoiding large-amplitude jitter during scanning;
step 2: inputting the acquired two-dimensional ultrasonic B-mode image sequence into the trained automatic segmentation network, which infers the LIB region and the MAB region in each two-dimensional ultrasonic B-mode image of the sequence and generates the masks corresponding to the LIB and MAB regions;
step 3: generating the three-dimensional visualization model of the carotid artery with the three-dimensional reconstruction and visualization module, based on the LIB and MAB regions of the two-dimensional ultrasonic B-mode image sequence obtained in step 2;
step 4: inputting the slice images and masks obtained in step 2 into the trained automatic diagnosis network and inferring a diagnosis result for each slice; if the automatic diagnosis network judges that plaque is present in N consecutive slices, the subject is diagnosed with carotid atherosclerosis; otherwise, the subject is judged free of carotid atherosclerosis, where N is an empirical threshold (a minimal sketch of this decision rule follows).
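A minimal sketch of the slice-level decision rule, assuming the per-slice predictions are an iterable of 0/1 plaque flags:

def has_carotid_atherosclerosis(slice_preds, n: int) -> bool:
    # Diagnose when at least n consecutive slices are predicted positive.
    run = 0
    for p in slice_preds:
        run = run + 1 if p else 0
        if run >= n:
            return True
    return False

For example, has_carotid_atherosclerosis([0, 1, 1, 1, 0], 3) returns True, while the same predictions with n = 4 return False.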
CN202211262883.4A 2022-10-14 2022-10-14 Portable three-dimensional carotid artery ultrasonic automatic diagnosis system and method Pending CN115553816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211262883.4A CN115553816A (en) 2022-10-14 2022-10-14 Portable three-dimensional carotid artery ultrasonic automatic diagnosis system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211262883.4A CN115553816A (en) 2022-10-14 2022-10-14 Portable three-dimensional carotid artery ultrasonic automatic diagnosis system and method

Publications (1)

Publication Number Publication Date
CN115553816A (en) 2023-01-03

Family

ID=84746446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211262883.4A Pending CN115553816A (en) 2022-10-14 2022-10-14 Portable three-dimensional carotid artery ultrasonic automatic diagnosis system and method

Country Status (1)

Country Link
CN (1) CN115553816A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645349A (en) * 2023-05-29 2023-08-25 沈阳工业大学 Image processing method and system for improving three-dimensional display effect of blood vessel
CN116645349B (en) * 2023-05-29 2024-03-19 沈阳工业大学 Image processing method and system for improving three-dimensional display effect of blood vessel


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination