CN112381164B - Ultrasound image classification method and device based on multi-branch attention mechanism - Google Patents


Info

Publication number
CN112381164B
CN112381164B
Authority
CN
China
Prior art keywords
network
feature extraction
image classification
extraction sub
ultrasonic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011309889.3A
Other languages
Chinese (zh)
Other versions
CN112381164A (en)
Inventor
孟慧
牛建伟
李青锋
谷宁波
陈晨
Current Assignee
Hangzhou Innovation Research Institute of Beihang University
Original Assignee
Hangzhou Innovation Research Institute of Beihang University
Priority date
Filing date
Publication date
Application filed by Hangzhou Innovation Research Institute of Beihang University filed Critical Hangzhou Innovation Research Institute of Beihang University
Priority to CN202011309889.3A priority Critical patent/CN112381164B/en
Publication of CN112381164A publication Critical patent/CN112381164A/en
Application granted granted Critical
Publication of CN112381164B publication Critical patent/CN112381164B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Abstract

The invention discloses an ultrasound image classification method and device based on a multi-branch attention mechanism, comprising the following steps. Step S100: collect ultrasound sectional images at different angles, together with their ground-truth labels, to construct a training sample set. Step S200: perform data augmentation and normalization on the ultrasound images collected in step S100. Step S300: build a multi-branch ultrasound image classification network, formed by connecting a plurality of branch networks in parallel, with a multi-branch attention module added to the network. Step S400: train the network built in step S300 on the training sample set. Step S500: classify actual ultrasound images using the network trained in step S400. The method comprehensively learns the information in ultrasound sectional images taken at different angles and achieves accurate, fast ultrasound image classification.

Description

Ultrasound image classification method and device based on multi-branch attention mechanism
Technical Field
The invention relates to ultrasound slice image classification technology, and in particular to an ultrasound image classification method and device based on a multi-branch attention mechanism.
Background
Ultrasound imaging is an important clinical imaging technology. It is radiation-free, painless, economical, widely applicable, and capable of real-time imaging, and is therefore widely used in tumor screening, prenatal diagnosis, surgical navigation, and other applications. However, ultrasound imaging suffers from poor image quality and large variability, so ultrasound diagnosis depends heavily on physician experience. Classifying ultrasound slice images with computer-aided diagnosis technology can therefore assist physicians in ultrasound diagnosis and improve both its recall and its accuracy.
Commonly used computer-aided diagnosis techniques include feature-engineering-based methods and deep-learning-based methods. Feature-engineering-based methods typically extract texture, morphology, model-based, and descriptor-based features of the lesion region and then classify the ultrasound slice images using the extracted features. These image features have good medical interpretability, but computing them usually requires manual intervention, which limits the real-time performance and objectivity of image analysis.
With the advent of deep learning, deep-learning-based methods can use a neural network to automatically extract low-level to high-level abstract features from ultrasound slice images and achieve fully automatic image classification. For example, the patent application published as CN111161273A discloses a deep-learning-based medical ultrasound slice image segmentation method, and the application published as CN110634125A discloses a deep-learning-based fetal ultrasound slice image recognition method and system.
Disclosure of Invention
In view of the foregoing, an object of the present invention is to provide an ultrasound image classification method and apparatus based on a multi-branch attention mechanism, which can improve the accuracy of image classification results.
In order to achieve the purpose, the invention provides the following technical scheme:
in a first aspect, a method for classifying ultrasound images based on a multi-branch attention mechanism includes the following steps:
(1) acquiring ultrasound slice images of the examined physiological tissue at multiple viewing angles, forming one sample from each tissue's multi-view ultrasound slice images and the corresponding ground-truth class label, and preprocessing and augmenting the samples to form a sample set;
(2) constructing an ultrasound image classification network comprising a plurality of feature extraction sub-networks and a mapping network, the number of feature extraction sub-networks being equal to the number of viewing angles. Each feature extraction sub-network extracts a second feature map from the ultrasound slice image of its input viewing angle and comprises a feature extraction unit and an attention module: the feature extraction unit extracts a first feature map from the input-view ultrasound slice image, and the attention module takes a feature map from the feature extraction unit and computes an attention weight matrix. The attention weight matrix is divided into two parts that are assigned, respectively, to the first feature map output by the feature extraction unit and to the second feature map output by the neighboring feature extraction sub-network; the weighted sum of the first and second feature maps with their respective divided attention weight matrices serves as the second feature map output by the sub-network to which the attention module belongs. The mapping network maps the input second feature map and outputs the predicted classification result;
(3) training the ultrasound image classification network by adopting a sample set to update network parameters, and after training is finished, forming an ultrasound image classification model by the determined network parameters and the ultrasound image classification network;
(4) acquiring the multi-view ultrasound slice images to be classified, preprocessing them, inputting them into the ultrasound image classification model, and obtaining the predicted classification result from the model.
Deep-learning-based methods rely on large training data sets, while publicly available ultrasound slice image data sets are few, which limits improvements in ultrasound slice image classification accuracy. To train an effective neural network on a limited data set, the sample data must be augmented; the augmentation modes include horizontal flipping and rotation. Each sample also needs a ground-truth class label, determined from medical prior knowledge. Combining data augmentation with labels determined from medical prior knowledge greatly speeds up training of the deep network and improves model robustness.
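As a sketch of the augmentation step, assuming numpy arrays for the slices (the helper name is hypothetical), the same flip/rotation is applied to every view of a sample so that the sample's label stays valid:

```python
import numpy as np

def augment_views(views, flip, k):
    """Apply the same horizontal flip / 90-degree rotation to every view.

    views : list of 2-D numpy arrays (one ultrasound slice per view angle)
    flip  : bool, whether to flip horizontally
    k     : number of counter-clockwise 90-degree rotations (0-3)
    """
    out = []
    for v in views:
        if flip:
            v = np.fliplr(v)
        if k:
            v = np.rot90(v, k)
        out.append(v)
    return out

# Example: a toy 2-view sample augmented with a horizontal flip only.
sample = [np.arange(4).reshape(2, 2), np.arange(4, 8).reshape(2, 2)]
flipped = augment_views(sample, flip=True, k=0)
```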
Considering that the multi-angle ultrasonic sectional view can provide more abundant measured physiological tissue information, the input of the ultrasonic image classification network is designed into the multi-angle ultrasonic sectional view. In consideration of redundant information existing in the multi-angle ultrasonic sectional view, a multi-branch attention module is constructed, and the learning burden of different network branches can be reduced by combining the characteristic information of different network branches in a layer-by-layer cascading mode, so that the problem of overfitting of the network is avoided. Compared with a neural network with single-view input, the multi-branch network based on multi-view input realizes higher classification accuracy.
Preferably, the first feature extraction sub-network comprises only a feature extraction unit; the first feature map output by that unit serves as the second feature map output by the sub-network, and the first feature extraction sub-network serves as the neighboring sub-network of the second. Every other feature extraction sub-network comprises both a feature extraction unit and an attention module; the second sub-network serves as the neighbor of the third, and so on. Neighboring feature extraction sub-networks are cascaded in turn through the attention weight matrix; the second feature map output by the last sub-network is fed directly into the mapping network, which outputs the classification prediction through its mapping computation.
Preferably, the feature extraction unit includes a convolutional layer, a pooling layer, and several residual modules. The feature map output by the last residual module is the first feature map, and the input feature map of the last residual module is tapped as the input of the attention module for computing the attention weight matrix. In some embodiments, the feature extraction unit comprises, in order, 1 convolutional layer, 1 pooling layer, and several connected residual modules. A residual module consists of several convolutional layers; at least 2 adjacent convolutional layers form a group, and the input of each group is the sum of the previous group's output feature map and its input feature map.
Preferably, the attention module includes a maximum pooling layer, an average pooling layer and a convolution layer, and the two pooled result maps obtained by processing the input feature map through the maximum pooling layer and the average pooling layer in parallel are spliced and input to the convolution layer, and are activated through convolution operation and a Sigmoid function, and then an attention weight matrix is output.
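A minimal numpy sketch of this attention module, assuming channel-wise max/average pooling and reducing the learned convolution to an illustrative 1×1 kernel (the function names and weights are hypothetical, not the patent's actual layers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(feat, w, b=0.0):
    """Sketch of the described attention module.

    feat : (C, H, W) feature map taken from the feature extraction unit.
    w    : (2,) weights of a 1x1 convolution over the two pooled maps
           (the patent uses a learned conv layer; a 1x1 kernel is an
           illustrative simplification).
    """
    max_map = feat.max(axis=0)                 # channel-wise max pooling
    avg_map = feat.mean(axis=0)                # channel-wise average pooling
    pooled = np.stack([max_map, avg_map])      # (2, H, W) concatenation
    logits = np.tensordot(w, pooled, axes=1) + b  # 1x1 convolution
    return sigmoid(logits)                     # (H, W) attention weights

feat = np.ones((8, 4, 4))
A = spatial_attention(feat, w=np.array([0.5, 0.5]))
```

Every entry of the output lies in (0, 1), so it can directly weight a feature map of the same spatial size.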
In order to adjust the contribution degree of the features to the final classification result, when the attention weight matrix is divided into two parts, a division ratio is set, the division ratio of the feature extraction sub-network to which the attention weight matrix belongs is larger than that of the adjacent feature extraction sub-networks, and the product of each division ratio and the attention weight matrix is used as the respective divided attention weight matrix. The self feature extraction sub-network and the adjacent feature extraction sub-networks are relative concepts, when a certain feature extraction sub-network is concerned, the concerned feature extraction sub-network is the self feature extraction sub-network, and the other feature extraction sub-networks are the adjacent feature extraction sub-networks. When the attention weight matrix calculated by the attention module of each branch is divided, the division ratio may be the same as or different from that of other branches.
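Under the assumption that the neighboring branch receives the complementary share of the matrix (the text only requires the owning branch's ratio to be the larger one), the division and weighted fusion can be sketched as:

```python
import numpy as np

def fuse_with_divided_attention(A, f_self, f_neigh, alpha=0.7):
    """Divide the attention weight matrix and fuse two feature maps.

    A       : (H, W) attention weight matrix from this branch's module.
    f_self  : first feature map from this branch's feature extraction unit.
    f_neigh : second feature map output by the neighbouring sub-network.
    alpha   : division ratio for the branch itself; alpha > 0.5 keeps the
              branch's own view dominant. The (1 - alpha) share for the
              neighbour is an interpretation, not stated verbatim.
    """
    a_self = alpha * A
    a_neigh = (1.0 - alpha) * A
    # The weighted sum becomes this sub-network's output (second) feature map.
    return a_self * f_self + a_neigh * f_neigh

A = np.full((2, 2), 1.0)
out = fuse_with_divided_attention(A, np.full((2, 2), 10.0),
                                  np.full((2, 2), 2.0), alpha=0.7)
```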
Preferably, the mapping network employs a fully connected layer. The full connection layer as a mapping network can realize multi-task classification.
Preferably, to improve training speed and results, when training the ultrasound image classification network the activation function is the ReLU function, the optimizer is stochastic gradient descent, and the loss function is the cross-entropy loss.
Preferably, the preprocessing comprises calculating the intensity mean and variance of all multi-view ultrasound slice images;
normalizing each multi-view ultrasonic slice image based on the intensity mean and the variance;
unifying the normalized multi-view ultrasonic slice images to a fixed size. Wherein the fixed dimensions may be equal in length and width.
An ultrasound image classification apparatus based on a multi-branch attention mechanism comprises a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor, wherein the computer processor implements the ultrasound image classification method based on the multi-branch attention mechanism when executing the computer program.
Compared with the prior art, the ultrasound image classification method and device based on the multi-branch attention mechanism provided by the invention offer at least the following benefits:
(1) based on a theoretical framework of deep learning, the characteristics in the ultrasonic slice images are directly learned by using a neural network, and an end-to-end ultrasonic slice image classification task is realized;
(2) the multi-angle ultrasonic sectional image is used as network input, so that the problems of insufficient information and information deviation caused by single-view input are solved;
(3) a multi-branch attention module is adopted to learn the feature weights of different branch networks, so that redundant information is effectively filtered;
(4) the output of different network branches is integrated by utilizing a cascading mode, the learning burden of different branch networks is reduced, and the problem of network overfitting is avoided.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an ultrasound image classification method based on a multi-branch attention mechanism according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an ultrasound image classification network according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In order to improve efficiency and accuracy of ultrasound image classification, the embodiment of the invention provides an ultrasound image classification method based on a multi-branch attention mechanism. Fig. 1 is a flowchart of an ultrasound image classification method based on a multi-branch attention mechanism according to an embodiment of the present invention. As shown in fig. 1, the ultrasound image classification method provided by the embodiment includes the following steps:
s100, collecting the ultrasonic slice images of the detected physiological tissues under multiple visual angles, and constructing a sample set.
In this embodiment, an ultrasound probe first scans the imaged organ over a wide area to locate the suspicious physiological tissue, and ultrasound sectional images of that tissue at different angles are acquired by rotating the probe. Specifically, the probe may be rotated clockwise 4 times over the examined tissue, by 45 degrees each time, acquiring ultrasound sectional images at 4 angles in sequence.
In this embodiment, the ground-truth class label of each sample is determined from medical prior knowledge: a needle biopsy of the examined tissue may be performed under ultrasound guidance, and the tissue class determined from the biopsy analysis serves as its ground-truth label. The multi-view ultrasound slice images of each examined tissue and the corresponding ground-truth label form one sample.
In order to improve the quality and the quantity of the samples, the samples need to be preprocessed and data-augmented, and the specific process is as follows:
Firstly, the intensity mean and standard deviation of all collected ultrasound slice images are calculated, and each ultrasound slice image is normalized based on them. The specific formulas are:

$$\mu = \frac{1}{\sum_{i \in \Omega} H_i W_i} \sum_{i \in \Omega} \sum_{x=1}^{H_i} \sum_{y=1}^{W_i} v_i(x, y)$$

$$\delta = \sqrt{\frac{1}{\sum_{i \in \Omega} H_i W_i} \sum_{i \in \Omega} \sum_{x=1}^{H_i} \sum_{y=1}^{W_i} \big(v_i(x, y) - \mu\big)^2}$$

$$v' = \frac{v - \mu}{\delta}$$

where $\mu$ and $\delta$ are the intensity mean and standard deviation over all ultrasound slice images, $i$ indexes the ultrasound slice images, $\Omega$ is the set of images, $H_i$ and $W_i$ are the height and width of image $i$, $(x, y)$ is a pixel index, $v_i(x, y)$ is the intensity of image $i$ at $(x, y)$, and $v'$ and $v$ are the normalized and original image intensities, respectively.
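The normalization statistics just described can be sketched in numpy as follows (helper names are hypothetical; the mean and standard deviation are computed over the pixels of all slices, which may have different sizes):

```python
import numpy as np

def dataset_stats(images):
    """Global intensity mean and standard deviation over all slices."""
    pixels = np.concatenate([im.ravel().astype(float) for im in images])
    return pixels.mean(), pixels.std()

def normalize(image, mu, delta):
    """v' = (v - mu) / delta, applied pixel-wise."""
    return (image.astype(float) - mu) / delta

# Toy data set of two tiny "slices".
imgs = [np.array([[0.0, 2.0]]), np.array([[4.0, 6.0]])]
mu, delta = dataset_stats(imgs)
norm0 = normalize(imgs[0], mu, delta)
```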
Then, all ultrasound slice images are resized to a uniform size (S, S): the long edge of the image is first scaled to S while preserving the aspect ratio, and the short edge is then padded to S using the mean intensity of the image edge.
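A dependency-free sketch of this resizing step, using nearest-neighbour scaling in place of whatever interpolation a real pipeline would choose (the function name is hypothetical):

```python
import numpy as np

def resize_keep_aspect(img, S):
    """Scale the long edge to S (nearest-neighbour), then pad the short
    edge to S with the mean intensity of the image border pixels."""
    h, w = img.shape
    scale = S / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    scaled = img[np.ix_(rows, cols)].astype(float)
    # Fill value: mean intensity of the image border pixels.
    edge = np.concatenate([img[0, :], img[-1, :], img[:, 0], img[:, -1]])
    out = np.full((S, S), edge.mean(), dtype=float)
    y0, x0 = (S - nh) // 2, (S - nw) // 2
    out[y0:y0 + nh, x0:x0 + nw] = scaled
    return out

square = resize_keep_aspect(np.arange(8, dtype=float).reshape(4, 2), 8)
```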
And finally, performing sample expansion by adopting a horizontal overturning and rotating mode, wherein the ultrasonic slice image in each sample adopts the same transformation mode.
S200, constructing an ultrasonic image classification network.
Fig. 2 is a schematic structural diagram of an ultrasound image classification network according to an embodiment of the present invention. As shown in fig. 2, the ultrasound image classification network constructed in this embodiment consists of 4 feature extraction sub-networks and a fully connected layer. The inputs of the 4 sub-networks are the ultrasound slice images of the 4 viewing angles of one sample, referred to as views 1-4 for short. The first sub-network, corresponding to view 1, contains only a feature extraction unit; the other three sub-networks each contain both a feature extraction unit and an attention module. The attention module takes a feature map from the feature extraction unit and computes an attention weight matrix.
The feature extraction unit comprises, in order, 1 convolutional layer, 1 pooling layer, and 4 residual modules. The feature map output by the last residual module is the first feature map, and the input feature map of the last residual module is tapped as the input of the attention module for computing the attention weight matrix.
The attention module comprises a maximum pooling layer, an average pooling layer and a convolution layer, wherein two pooling result graphs of an input characteristic graph processed by the maximum pooling layer and the average pooling layer in parallel are spliced and input into the convolution layer, and an attention weight matrix is output after convolution operation and Sigmoid function activation.
The attention weight matrix computed by the attention module is divided into two parts, assigned respectively to the first feature map output by the feature extraction unit and to the second feature map output by the neighboring feature extraction sub-network; the weighted sum of the first and second feature maps with their respective divided attention weight matrices serves as the second feature map output by the sub-network to which the attention module belongs. When the attention weight matrix is divided, a division ratio is set such that the share of the sub-network that owns the matrix is larger than that of the neighboring sub-network; the product of each division ratio and the attention weight matrix gives the respective divided attention weight matrix.
For example, the attention weight matrix computed by the attention module of the second feature extraction sub-network in fig. 2 is divided into two parts with division ratio α: the product of the attention weight matrix and α serves as the attention weight matrix A of the second sub-network itself, while the product of the attention weight matrix and (1 − α) serves as the attention weight matrix A' for the neighboring sub-network (the first sub-network, corresponding to view 1). The weighted sum of the first feature map output by the second sub-network's feature extraction unit (view 2), weighted by A, and the second feature map output by the first sub-network (view 1), weighted by A', serves as the second feature map output by the second sub-network. The third sub-network (view 3) and the fourth sub-network (view 4) follow the same rule; the second feature map output by the fourth sub-network is fed directly into the fully connected layer, whose mapping outputs the predicted classification result.
In this embodiment, the division ratios α, β, and γ (used by the attention modules of the second, third, and fourth sub-networks, respectively) are all greater than 0.5, ensuring that the feature information of the current input view receives more attention. Each residual module comprises several convolutional layers, grouped in pairs; the input of each group is the sum of the previous group's input and output feature maps.
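The grouped skip connections can be sketched with a stand-in for the convolution layers (the names and the element-wise "conv" are illustrative assumptions, not the patent's actual layers):

```python
import numpy as np

def conv_like(x, w):
    """Stand-in for a convolution layer: an element-wise linear map
    followed by ReLU. A real residual module uses learned 2-D
    convolutions; this keeps only the skip-connection structure."""
    return np.maximum(w * x, 0.0)

def residual_group(x, w1, w2):
    """One group of two stacked 'conv' layers with a skip connection:
    the group's input is added to its output."""
    return x + conv_like(conv_like(x, w1), w2)

def residual_module(x, weights):
    """Several groups chained: each group receives the previous group's
    input-plus-output."""
    for w1, w2 in weights:
        x = residual_group(x, w1, w2)
    return x

y = residual_module(np.array([1.0, 2.0]), [(1.0, 1.0), (0.0, 0.0)])
```

Note that with the second group's weights set to zero the input still passes through unchanged, which is the point of the skip connection.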
And S300, training an ultrasonic image classification network by adopting a sample set to obtain an ultrasonic image classification model.
During training, the cross-entropy between the ground-truth label of an input sample and the network's predicted classification is used as the loss function, the ReLU function is used as the activation function of each network layer, and the network parameters are updated by stochastic gradient descent. After training, the determined parameters together with the ultrasound image classification network form the ultrasound image classification model.
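A toy numpy sketch of the training objective and update rule named above (cross-entropy loss with a stochastic-gradient-descent step; all names are hypothetical):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, label):
    """Cross-entropy between the predicted distribution and the
    ground-truth class label (an integer index)."""
    return -np.log(softmax(logits)[label])

def sgd_step(w, grad, lr=0.1):
    """One stochastic-gradient-descent parameter update."""
    return w - lr * grad

# Toy check: the gradient of cross-entropy w.r.t. logits is
# softmax(logits) - onehot(label); one SGD step should lower the loss.
logits = np.array([2.0, 0.5, 0.1])
p = softmax(logits)
onehot = np.array([1.0, 0.0, 0.0])
new_logits = sgd_step(logits, p - onehot)
loss_before = cross_entropy(logits, 0)
loss_after = cross_entropy(new_logits, 0)
```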
And S400, classifying the ultrasonic images by using the ultrasonic image classification model.
For actual classification, the multi-view ultrasound slice images to be classified are acquired, preprocessed, and input into the ultrasound image classification model, which predicts their classification result. Preprocessing here comprises normalization using the mean and standard deviation computed in S100 and resizing the multi-view ultrasound slice images to (S, S).
The embodiment also provides an ultrasound image classification device based on the multi-branch attention mechanism, which comprises a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor, wherein the computer processor realizes the ultrasound image classification method based on the multi-branch attention mechanism when executing the computer program.
In practical applications, the computer memory may be local volatile memory such as RAM, non-volatile memory such as ROM, FLASH, a floppy disk, or a mechanical hard disk, or remote cloud storage. The computer processor may be a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA); any of these processors can carry out the steps of the multi-branch-attention-based ultrasound image classification.
The ultrasound image classification method and device based on the multi-branch attention mechanism provided by this embodiment are, in effect, a data-driven ultrasound image classification approach: no image features need to be extracted manually, and a neural network learns the features of the ultrasound images directly, achieving an end-to-end classification task. The method uses multi-angle ultrasound sectional images as network input, avoiding the insufficient-information and information-bias problems of single-view input; the multi-branch attention module learns the feature weights of the different branch networks, effectively filters redundant information, improves the learning performance of the network, and thus improves the accuracy of ultrasound image classification.
It is noted that the word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The order of the steps is not limited to that listed above and may be varied or rearranged as desired, unless specifically stated or necessary to occur in sequence. The embodiments described above may be mixed and matched with each other or with other embodiments based on design and reliability considerations, i.e., technical features in different embodiments may be freely combined to form further embodiments.
The embodiments of the present disclosure have thus been described in detail with reference to the accompanying drawings. Implementations not shown or described in the drawings or the text are forms known to those of ordinary skill in the art and are not described in detail. In addition, the above definitions of the components and methods are not limited to the specific structures, shapes, or manners mentioned in the embodiments; those skilled in the art may easily modify or replace them, and such modifications fall within the protection scope of the present invention.

Claims (9)

1. An ultrasound image classification method based on a multi-branch attention mechanism is characterized by comprising the following steps:
(1) acquiring ultrasound slice images of the examined physiological tissue at multiple viewing angles, forming one sample from each tissue's multi-view ultrasound slice images and the corresponding ground-truth class label, and preprocessing and augmenting the samples to form a sample set;
(2) constructing an ultrasound image classification network, wherein the ultrasound image classification network comprises a plurality of feature extraction sub-networks and a mapping network, the number of feature extraction sub-networks equals the number of viewing angles, and each feature extraction sub-network extracts a second feature map from the ultrasound slice image of its input viewing angle; the first feature extraction sub-network comprises only a feature extraction unit, and the first feature map output by that unit serves as the second feature map output by the first feature extraction sub-network; each feature extraction sub-network other than the first comprises a feature extraction unit and an attention module, the feature extraction unit extracting a first feature map from the input-view ultrasound slice image, and the attention module taking a feature map from the feature extraction unit and computing an attention weight matrix; the attention weight matrix is divided into two parts assigned, respectively, to the first feature map output by the feature extraction unit and to the second feature map output by the neighboring feature extraction sub-network, and the weighted sum of the first and second feature maps with their respective divided attention weight matrices serves as the second feature map output by the sub-network to which the attention module belongs;
wherein the first feature extraction sub-network serves as the neighboring feature extraction sub-network of the second, and the second serves as the neighbor of the third; by analogy, neighboring feature extraction sub-networks are cascaded in turn through the attention weight matrix, the second feature map output by the last feature extraction sub-network is fed directly into the mapping network, and the classification prediction result is output through mapping computation;
(3) training the ultrasound image classification network on a sample set to update the network parameters; after training is finished, the determined network parameters together with the ultrasound image classification network form an ultrasound image classification model;
(4) collecting multi-view ultrasound slice images to be classified, preprocessing them, and inputting the preprocessed multi-view ultrasound slice images into the ultrasound image classification model, which predicts on them to obtain a classification result.
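The branch cascade described in claim 1 can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: `spatial_attention` is a toy stand-in for the attention module, and the 0.6/0.4 split ratios are hypothetical placeholders.

```python
import numpy as np

def spatial_attention(fmap):
    """Toy stand-in for the attention module: per-pixel weights in (0, 1)."""
    score = fmap.mean(axis=0)                  # collapse channels (C,H,W) -> (H,W)
    return 1.0 / (1.0 + np.exp(-score))        # sigmoid -> attention weight matrix

def cascade(view_fmaps, r_self=0.6, r_adj=0.4):
    """Fuse per-view first feature maps branch by branch.

    view_fmaps: list of (C,H,W) arrays, one per view, already produced by
    each branch's feature extraction unit. Branch 1 passes its map through
    unchanged; every later branch fuses its own map with the previous
    branch's output using a split attention weight matrix.
    """
    second = view_fmaps[0]                     # first sub-network: passthrough
    for fmap in view_fmaps[1:]:
        a = spatial_attention(fmap)            # (H,W) attention weight matrix
        second = r_self * a * fmap + r_adj * a * second
    return second                              # fed to the mapping network

rng = np.random.default_rng(0)
views = [rng.standard_normal((4, 8, 8)) for _ in range(3)]
out = cascade(views)
print(out.shape)                               # (4, 8, 8)
```

The fused map keeps the shape of a single branch's feature map, so the mapping network only ever sees one tensor regardless of the number of views.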
2. The ultrasound image classification method based on a multi-branch attention mechanism as claimed in claim 1, wherein the feature extraction unit comprises a convolution layer, a pooling layer and a plurality of residual modules; the feature map output by the last residual module is the first feature map, and the input feature map of the last residual module is extracted as the input of the attention module for computing the attention weight matrix.
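The residual modules of claim 2 follow the usual y = ReLU(x + F(x)) formulation. A minimal NumPy sketch with 1x1-convolution residual branches; the layer sizes are illustrative, not taken from the patent.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(x + W2 . ReLU(W1 . x)): a 1x1-convolution residual module.

    x: (C, H, W) feature map; w1, w2: (C, C) channel-mixing weights,
    applied per pixel (equivalent to 1x1 convolutions).
    """
    h = relu(np.einsum('dc,chw->dhw', w1, x))   # first 1x1 conv + ReLU
    f = np.einsum('dc,chw->dhw', w2, h)         # second 1x1 conv
    return relu(x + f)                          # identity shortcut + residual

rng = np.random.default_rng(1)
x = relu(rng.standard_normal((4, 6, 6)))        # non-negative input
zero = np.zeros((4, 4))
print(np.allclose(residual_block(x, zero, zero), x))  # True: zero residual = identity
```

The identity shortcut is what makes extracting the *input* of the last residual module (as the claim specifies) a natural tap point: it carries the same spatial layout as the module's output.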
3. The ultrasound image classification method based on a multi-branch attention mechanism as claimed in claim 1 or 2, wherein the attention module comprises a max pooling layer, an average pooling layer and a convolution layer; the two pooled result maps obtained by processing the input feature map in parallel through the max pooling layer and the average pooling layer are concatenated and input to the convolution layer, and the attention weight matrix is output after the convolution operation and Sigmoid activation.
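Claim 3's attention module follows the familiar spatial-attention pattern (max pool + average pool, concatenate, convolve, sigmoid, as in CBAM). A NumPy sketch: the 1x1 convolution here stands in for the claimed convolution layer, whose kernel size the claim does not fix, and the weights `w` and bias `b` are illustrative.

```python
import numpy as np

def attention_weights(fmap, w=(0.5, 0.5), b=0.0):
    """Compute an (H, W) attention weight matrix from a (C, H, W) feature map.

    Max-pool and average-pool along the channel axis, combine the two
    pooled (H, W) maps with a 1x1 convolution over the 2-channel stack
    (weights w, bias b), then squash with a sigmoid.
    """
    mx = fmap.max(axis=0)                # channel-wise max pooling  -> (H, W)
    av = fmap.mean(axis=0)               # channel-wise average pooling -> (H, W)
    conv = w[0] * mx + w[1] * av + b     # 1x1 conv on the concatenated maps
    return 1.0 / (1.0 + np.exp(-conv))   # Sigmoid -> weights strictly in (0, 1)

rng = np.random.default_rng(2)
a = attention_weights(rng.standard_normal((8, 5, 5)))
print(a.shape)                           # (5, 5)
```

Because the pooling collapses the channel axis, the resulting weight matrix is purely spatial, which is what allows the same matrix to be split and applied to two different feature maps in claim 1.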
4. The ultrasound image classification method based on a multi-branch attention mechanism according to claim 1, wherein when the attention weight matrix is divided into two parts, division ratios are set such that the ratio for the feature extraction sub-network to which the attention weight matrix belongs is larger than the ratio for the adjacent feature extraction sub-network, and the products of the respective division ratios with the attention weight matrix are used as the respective divided attention weight matrices.
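Under claim 4, one attention weight matrix A is scaled by two fixed ratios to produce the pair of matrices used in claim 1's weighted sum. A sketch with hypothetical ratios 0.7/0.3 — the claim only requires that the branch's own ratio be the larger one.

```python
import numpy as np

def fuse(first_fmap, adj_second_fmap, attn, r_self=0.7, r_adj=0.3):
    """Weighted sum of claim 1 using the divided matrices of claim 4.

    attn: (H, W) attention weight matrix; r_self > r_adj so the branch's
    own first feature map dominates the fused second feature map.
    """
    assert r_self > r_adj               # claim 4: own ratio must be larger
    a_self = r_self * attn              # divided matrix for this sub-network
    a_adj = r_adj * attn                # divided matrix for the adjacent one
    return a_self * first_fmap + a_adj * adj_second_fmap

f1 = np.ones((4, 4))
f2 = np.ones((4, 4))
attn = np.full((4, 4), 0.5)
print(float(fuse(f1, f2, attn)[0, 0]))   # 0.5  (= 0.7*0.5 + 0.3*0.5)
```

Since the two ratios scale the same matrix, the split controls only the relative contribution of the two branches, while the attention matrix itself still decides *where* in the image either branch is emphasized.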
5. The ultrasound image classification method based on a multi-branch attention mechanism according to claim 1, wherein the mapping network employs a fully connected layer.
6. The ultrasound image classification method based on a multi-branch attention mechanism as claimed in claim 1, wherein, when training the ultrasound image classification network, the activation function is the ReLU function, the optimizer is stochastic gradient descent, and the loss function is the cross-entropy loss function.
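The training ingredients named in claim 6 are standard; here is a NumPy sketch of softmax cross-entropy with one hand-rolled SGD step on a linear mapping layer. All shapes and the learning rate are illustrative, not from the patent.

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single sample: -log p(label)."""
    z = logits - logits.max()                 # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label]), p

rng = np.random.default_rng(3)
x = rng.standard_normal(8)                    # flattened second feature map
W = np.zeros((3, 8))                          # mapping layer: 3 classes
label, lr = 1, 0.5
loss0, p = cross_entropy(W @ x, label)
grad = np.outer(p - np.eye(3)[label], x)      # d(loss)/dW for softmax CE
W -= lr * grad                                # one stochastic gradient step
loss1, _ = cross_entropy(W @ x, label)
print(loss1 < loss0)                          # True: the step reduces the loss
```

With zero-initialized weights the initial loss equals log(number of classes), which makes a convenient sanity check when wiring up training.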
7. The ultrasound image classification method based on a multi-branch attention mechanism according to claim 1, wherein the preprocessing comprises: computing the intensity mean and variance over all multi-view ultrasound slice images;
normalizing each multi-view ultrasound slice image based on the intensity mean and variance;
and resizing the normalized multi-view ultrasound slice images to a fixed size.
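Claim 7's preprocessing amounts to dataset-wide intensity normalization followed by resizing to a fixed shape. A NumPy sketch; the nearest-neighbor resize and the 16x16 target are assumptions, since the patent does not specify the interpolation or size.

```python
import numpy as np

def preprocess(images, size=(16, 16)):
    """Normalize with the dataset-wide mean/std, then resize each image.

    images: list of 2-D grayscale ultrasound slices of varying shapes.
    """
    stacked = np.concatenate([im.ravel() for im in images])
    mean, std = stacked.mean(), stacked.std()          # dataset intensity stats
    out = []
    for im in images:
        norm = (im - mean) / (std + 1e-8)              # zero-mean, unit-variance
        rows = np.arange(size[0]) * im.shape[0] // size[0]
        cols = np.arange(size[1]) * im.shape[1] // size[1]
        out.append(norm[np.ix_(rows, cols)])           # nearest-neighbor resize
    return out

rng = np.random.default_rng(4)
imgs = [rng.uniform(0, 255, (20, 24)), rng.uniform(0, 255, (30, 18))]
pre = preprocess(imgs)
print([p.shape for p in pre])                          # [(16, 16), (16, 16)]
```

Using one mean/variance pair computed over *all* images, rather than per image, keeps relative brightness differences between views intact, which matches the wording of the claim.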
8. The ultrasound image classification method based on a multi-branch attention mechanism according to claim 1, wherein the data augmentation modes include horizontal flipping and rotation.
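The two augmentations in claim 8 are one-liners in NumPy; a 90-degree rotation is used here for simplicity, since the patent does not fix the rotation angle.

```python
import numpy as np

def augment(image):
    """Return the horizontal flip and a 90-degree rotation of an image."""
    flipped = np.fliplr(image)       # horizontal flipping
    rotated = np.rot90(image)        # rotation (angle choice is illustrative)
    return flipped, rotated

img = np.arange(12).reshape(3, 4)
flipped, rotated = augment(img)
print(np.array_equal(np.fliplr(flipped), img))   # True: flipping is an involution
print(rotated.shape)                             # (4, 3)
```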
9. An ultrasound image classification apparatus based on a multi-branch attention mechanism, comprising a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, wherein the computer processor, when executing the computer program, implements the ultrasound image classification method based on a multi-branch attention mechanism as claimed in any one of claims 1 to 8.
CN202011309889.3A 2020-11-20 2020-11-20 Ultrasound image classification method and device based on multi-branch attention mechanism Active CN112381164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011309889.3A CN112381164B (en) 2020-11-20 2020-11-20 Ultrasound image classification method and device based on multi-branch attention mechanism

Publications (2)

Publication Number Publication Date
CN112381164A CN112381164A (en) 2021-02-19
CN112381164B true CN112381164B (en) 2022-09-20

Family

ID=74584440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011309889.3A Active CN112381164B (en) 2020-11-20 2020-11-20 Ultrasound image classification method and device based on multi-branch attention mechanism

Country Status (1)

Country Link
CN (1) CN112381164B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861443B (en) * 2021-03-11 2022-08-30 合肥工业大学 Advanced learning fault diagnosis method integrated with priori knowledge
CN113392655A (en) * 2021-06-08 2021-09-14 沈阳雅译网络技术有限公司 Method for accelerating translation model training speed based on multi-branch network structure
CN113642611B (en) * 2021-07-16 2024-04-12 重庆邮电大学 Fetal heart ultrasonic image identification method based on multiple granularities
CN114550313A (en) * 2022-02-18 2022-05-27 北京百度网讯科技有限公司 Image processing method, neural network, and training method, device, and medium thereof
CN117392124B (en) * 2023-12-08 2024-02-13 山东大学 Medical ultrasonic image grading method, system, server, medium and device

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US9171130B2 (en) * 2009-11-24 2015-10-27 Penrad Technologies, Inc. Multiple modality mammography image gallery and clipping system
CN110164550B (en) * 2019-05-22 2021-07-09 杭州电子科技大学 Congenital heart disease auxiliary diagnosis method based on multi-view cooperative relationship
CN110443143B (en) * 2019-07-09 2020-12-18 武汉科技大学 Multi-branch convolutional neural network fused remote sensing image scene classification method
CN110992270A (en) * 2019-12-19 2020-04-10 西南石油大学 Multi-scale residual attention network image super-resolution reconstruction method based on attention
CN111598108A (en) * 2020-04-22 2020-08-28 南开大学 Rapid salient object detection method of multi-scale neural network based on three-dimensional attention control
CN111444889B (en) * 2020-04-30 2023-07-25 南京大学 Fine granularity action detection method of convolutional neural network based on multistage condition influence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant