CN117392124A - Medical ultrasonic image grading method, system, server, medium and device - Google Patents


Info

Publication number
CN117392124A
Authority
CN
China
Prior art keywords
grading
image
model
medical
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311674642.5A
Other languages
Chinese (zh)
Other versions
CN117392124B (en)
Inventor
马金连
景欣
刘村
焦军燕
严奇琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute Of Shandong University
Shandong University
Original Assignee
Shenzhen Research Institute Of Shandong University
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute Of Shandong University, Shandong University filed Critical Shenzhen Research Institute Of Shandong University
Priority to CN202311674642.5A priority Critical patent/CN117392124B/en
Publication of CN117392124A publication Critical patent/CN117392124A/en
Application granted granted Critical
Publication of CN117392124B publication Critical patent/CN117392124B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The invention belongs to the field of artificial-intelligence deep learning and provides a medical ultrasonic image grading method, system, server, medium and device. The grading method comprises the following steps: acquiring a medical ultrasound image and an RGB image dataset; pre-training the constructed grading model on the RGB image dataset to obtain a first grading model, then transferring the first grading model to the medical ultrasound images for fine-tuning training to obtain a second grading model; and obtaining a grading result based on the medical ultrasound image to be graded and the grading model. Through the use of an attention mechanism, the invention adaptively weights different channels and spatial positions, alleviates the class-imbalance problem, and improves classification accuracy. The system finally outputs the grading level of the image, which can provide an auxiliary reference for doctors and improve the accuracy and efficiency of clinical diagnosis.

Description

Medical ultrasonic image grading method, system, server, medium and device
Technical Field
The invention belongs to the field of artificial intelligence deep learning, and particularly relates to a medical ultrasonic image grading method, a system, a server, a medium and equipment.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Artificial-intelligence classification of medical images refers to the analysis and diagnosis of medical images using computer vision techniques and artificial-intelligence algorithms. Medical imaging includes many modalities, such as X-ray, CT, MRI and ultrasound, and the analysis and diagnosis of medical images plays an important role in the early detection and treatment of disease. In medical image classification, artificial-intelligence technology can greatly improve the accuracy and efficiency of diagnosis, shorten diagnosis time, and help doctors make more accurate diagnosis and treatment decisions. At the same time, medical image classification techniques still face problems and challenges, such as insufficient data volume, unbalanced data and a lack of labeling information, and require further improvement.
The inventors have found that current pathological examination requires surgical or puncture sampling. For example, fatty liver is a common metabolic disorder, and the current gold standard for fatty liver grading is liver histopathological examination: a liver tissue specimen is obtained, and information on the liver's fat content, degree of fibrosis, inflammatory response and so on is determined through observation and evaluation under a microscope. This invasive diagnostic method carries a certain trauma and risk for patients, is expensive, and is unsuitable for large-scale screening or dynamic monitoring. Developing non-invasive diagnostic techniques based on medical images is therefore a hotspot and difficulty of current research.
The main non-invasive clinical alternatives are medical imaging technologies such as ultrasound imaging, CT and MRI, but these technologies suffer from strong operator subjectivity, complicated operation flows, high diagnostic cost, and inaccurate, non-standardized grading results.
Disclosure of Invention
In order to solve at least one technical problem in the background art, the invention provides a medical ultrasonic image grading method, a system, a server, a medium and equipment, which realize automatic grading of medical ultrasonic images by using a deep convolutional neural network model and improve diagnosis efficiency of doctors.
According to a first aspect of the present disclosure, there is provided a medical ultrasound image grading method, the method comprising:
acquiring a medical ultrasound image and an RGB image dataset;
pre-training the constructed grading model based on the RGB image data set to obtain a first grading model, and transferring the first grading model to a medical ultrasonic image for fine tuning training to obtain a second grading model;
the construction process of the grading model comprises the following steps: introducing residual blocks and an attention mechanism on the basis of the ResNeAt network structure, and adding a channel attention module and a spatial attention module before the first residual block and after the last residual block. A first feature map is obtained through the channel attention module; the first feature map is passed through the spatial attention module, where features are extracted in the horizontal and vertical directions, to obtain a second feature map; the first and second feature maps are weighted, and a classification result is output through a classification layer;
and obtaining a grading result based on the medical ultrasonic image to be graded and the grading model.
In some embodiments, after the medical ultrasound image is acquired, preprocessing of the medical ultrasound image includes black-frame removal, image graying, local adaptive histogram equalization, image enhancement, and data partitioning.
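The equalization step can be illustrated with a small NumPy sketch. Note this shows plain global histogram equalization for brevity, whereas the patent specifies local (adaptive) equalization; the function name and the example image are illustrative assumptions, not the inventors' code.

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grayscale image.

    A simplified stand-in for the local adaptive equalization used in
    the patent: it builds the cumulative distribution of gray levels
    and remaps pixels so intensities spread over the full 0..255 range.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first non-zero CDF value
    denom = max(cdf[-1] - cdf_min, 1)    # guard against constant images
    lut = np.round((cdf - cdf_min) / denom * 255).astype(np.uint8)
    return lut[img]

# A low-contrast ramp image confined to gray levels [100, 150]
img = np.tile(np.linspace(100, 150, 64, dtype=np.uint8), (64, 1))
eq = hist_equalize(img)
```

After equalization the intensities span the full dynamic range, which is the contrast enhancement the preprocessing step is after.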
In some embodiments, the medical ultrasound images are divided into a training set, a verification set and a test set; the saved first grading model is migrated to the training and verification sets for retraining and verification to obtain the second grading model, and the second grading model is tested on the test set until it reaches the usage standard.
In some embodiments, a cosine annealing learning rate adjustment algorithm is used to adjust the learning rate when pre-training the classification model.
In some embodiments, after the second grading model is obtained, the second grading model is cross-validated, and the mean and variance of the cross-validation accuracies are calculated.
In some embodiments, the ResNeAt network includes four groups of residual blocks in different numbers, each residual block consisting of one depth-separable convolutional layer, two ordinary convolutional layers and one Layer Normalization (Layer Norm) layer; three downsampling layers, each consisting of one Layer Norm layer and one ordinary convolutional layer; an attention module; and a classification layer consisting of one global average pooling layer, one Layer Norm layer and one fully connected layer.
According to a second aspect of the present invention, there is provided a server comprising an acquisition module, a training module and a grading module, the acquisition module configured to acquire medical ultrasound images and RGB image datasets; the training module is configured to pretrain the constructed grading model based on the RGB image data set to obtain a first grading model, and migrate the first grading model to the medical ultrasonic image for fine tuning training to obtain a second grading model; and the grading module is configured to obtain grading results based on the medical ultrasonic image to be graded and the grading model.
According to a third aspect of the present invention, there is provided a depth-model-based medical ultrasound image grading system comprising an image acquisition device and an image processing device; the image acquisition device is used for acquiring medical ultrasound images and RGB image datasets and sending them to the image processing device; the image processing device is for performing the method of any one of the above aspects.
According to a fourth aspect of the present invention, there is provided a non-transitory computer-readable medium storing computer program instructions which, when executed by a processor, cause the processor to perform the method of any of the above aspects.
According to a fifth aspect of the present invention, there is provided a computer device comprising a processor and a memory, the memory having stored thereon a computer program configured to, when executed on the processor, perform the method of any of the above aspects.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an intelligent ultrasound image grading method and system based on a depth model, which realizes automatic grading of ultrasound images with a deep convolutional neural network. The constructed ResNeAt network contains residual blocks and Convolutional Block Attention Module (CBAM) attention mechanisms, with a CBAM attention module added before the first residual block and after the last residual block. In the channel attention module, the features of each channel are multiplied by the corresponding weights; the feature map from the channel attention module is then input to the spatial attention module and processed twice, by global average pooling (GAP) in the horizontal direction and GAP in the vertical direction respectively. The feature maps processed by the channel and spatial attention mechanisms are multiplied by the corresponding weight coefficients to obtain the final feature map. Different channels and spatial positions are thereby weighted adaptively, so the network can focus on the most important features in channel and space; the network's attention to important features in the image is raised and the performance of the model is improved, while the problems of exploding gradients and data imbalance are effectively alleviated. Finally, efficient, accurate, low-cost and highly repeatable graded diagnosis of diseases such as fatty liver from ultrasound images is realized, with broad application prospects.
2. The grading model of the invention adopts Layer Normalization (Layer Norm). Compared with Batch Normalization, Layer Norm performs more stably on small samples, and because it normalizes each feature of each sample rather than the whole batch, it is very effective in suppressing the vanishing-gradient problem when training a deep network. Meanwhile, the grading model uses a cosine annealing learning rate (CosineAnnealingLR) adjustment algorithm to reduce the learning rate, and uses an improved Focal Loss as the loss function to reduce the influence of data imbalance.
3. The invention adopts images acquired by a variety of devices; despite the large image variability, a universal and efficient grading model is established.
4. The grading model adopts depth-separable convolution, first applying a depthwise spatial convolution and then a pointwise convolution, which greatly reduces the number of parameters and improves the speed of the model.
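The parameter saving can be checked with simple arithmetic. The 7×7 kernel and the 64/128 channel counts below are hypothetical sizes chosen for illustration (the patent's own kernel sizes are not stated legibly):

```python
def conv2d_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of an ordinary k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k spatial convolution followed by a 1 x 1
    pointwise convolution, the factorization the grading model uses."""
    return k * k * c_in + c_in * c_out

ordinary = conv2d_params(64, 128, 7)                 # 401408 weights
separable = depthwise_separable_params(64, 128, 7)   # 11328 weights
print(ordinary, separable)
```

For these sizes the separable form needs roughly 35 times fewer weights, which is where the claimed speed and parameter reduction comes from.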
These and other advantages of the present disclosure will become apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Embodiments of the present disclosure will now be described in more detail and with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of a method for classifying medical ultrasound images based on a depth model provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a ResNeAt network structure provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a server for ranking medical ultrasound images provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a depth model based medical ultrasound image grading system provided by an embodiment of the present invention;
FIG. 5 is an example system that includes an example computing device that represents one or more systems and/or devices that may implement the various techniques described herein.
Detailed Description
The following description provides specific details for a thorough understanding and implementation of various embodiments of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these details. In some instances, well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the disclosure. The terminology used in the present disclosure is to be understood in its broadest reasonable manner, even though it is being used in conjunction with a particular embodiment of the present disclosure.
Aiming at the small-sample problem, the invention uses the RandAugment data augmentation method to increase the diversity and quantity of data. Through the use of an attention mechanism, the invention can adaptively weight different channels and spatial positions, alleviates the class-imbalance problem, improves classification accuracy, and finally outputs the lesion grade, which can provide an auxiliary reference for doctors and improve the accuracy and efficiency of clinical diagnosis.
Fig. 1 schematically illustrates an ultrasound image intelligent classification method 100 based on a depth model, comprising the steps of:
step 101: acquiring data and an ultrasonic image;
step 102: preprocessing data, namely preprocessing an ultrasonic image and dividing the ultrasonic image into three parts, namely a training set, a verification set and a test set;
step 103: building the grading model: residual blocks and a Convolutional Block Attention Module (CBAM) attention mechanism are added on the basis of the ResNeAt network model, a cosine annealing learning rate (CosineAnnealingLR) adjustment algorithm is used to reduce the learning rate, and an improved Focal Loss is used as the loss function to reduce the influence of data imbalance;
step 104: performing transfer learning, namely pre-training the constructed ResNeAt network by adopting an ImageNet data set, and transferring the obtained model to the preprocessed data set for fine tuning training, so as to obtain a final ultrasonic image grading model;
step 105: dividing the images of the training and verification sets equally into five parts, each part serving in turn as the verification set with the remaining four parts as the training set, and performing five-fold cross-validation with the grading model obtained in step 104, thereby demonstrating the generalization ability of the model;
step 106: inputting the test set image into the trained ResNeAt network to obtain a grading result, and testing whether the network meets the clinical use standard.
In this example, the grading of fatty liver, as performed in liver histopathological examination, is taken as an example.
In the data acquisition module, the ultrasound images are abdominal ultrasound images acquired with four different devices, from GE Healthcare, SAMSUNG, TOSHIBA and HITACHI.
After annotation of the images of different resolutions (720-576-1920) by an imaging physician, 2000 images each of normal, mild and moderate fatty liver and 500 images of severe fatty liver were obtained.
The data preprocessing process comprises the following steps:
step 201: removing poor quality images with a large number of black areas in the collected ultrasonic images;
step 202: cutting off a black frame in each ultrasonic image, and only reserving a middle sector area;
step 203: converting all the images to grayscale;
step 204: applying local adaptive histogram equalization to all the images to enhance contrast;
step 205: augmenting the severe fatty liver images to 2000 using the open-source data augmentation algorithm RandAugment, specifically: rotating the severe fatty liver ultrasound images by 90°/180°/270°, mirroring them vertically and horizontally, and the like;
step 206: dividing all the processed images into a training set, a verification set and a test set at a ratio of 8:1:1.
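Step 206 can be sketched in plain Python. The function name, the fixed seed, and the 8000-image total (2000 per class after augmentation) are illustrative assumptions:

```python
import random

def split_811(items, seed=0):
    """Shuffle items and split them into training / verification / test
    sets at a ratio of 8:1:1, as in step 206."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# 8000 images in total, assuming 2000 per class after augmentation
train, val, test = split_811(range(8000))
print(len(train), len(val), len(test))
```

For 8000 images this yields 6400 / 800 / 800 images; the shuffle keeps the three subsets disjoint while covering the whole dataset.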
In step 103, the constructed ResNeAt network structure is shown in FIG. 2 and consists of four groups of residual blocks in different numbers, three downsampling layers, attention modules and a final classification layer. Each residual block consists of one depth-separable convolution layer, two ordinary convolution layers and one Layer Norm layer, all with stride 1; the four residual blocks are repeated 3, 3, 9 and 3 times respectively. Each downsampling layer consists of one Layer Norm layer and one ordinary convolution layer with a 2×2 kernel and stride 2. The classification layer consists of one global average pooling layer, one Layer Norm layer and one fully connected layer. Convolutional Block Attention Module (CBAM) attention modules, each including a channel attention module and a spatial attention module, are added before the first and after the last residual block. A cosine annealing learning rate (CosineAnnealingLR) adjustment algorithm is used to reduce the learning rate, and an improved Focal Loss is used as the loss function to reduce the effects of data imbalance.
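The patent does not spell out its "improved" Focal Loss. As a reference point, a minimal sketch of the standard α-balanced binary Focal Loss it builds on is given below; the default α and γ values are conventional choices, not values taken from the patent:

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Alpha-balanced binary focal loss for one prediction.

    p: predicted probability of the positive class, y: true label (0/1).
    The (1 - p_t)**gamma factor down-weights easy, well-classified
    examples, which is how the loss reduces the influence of class
    imbalance during training.
    """
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.9, 1)   # well-classified positive: tiny loss
hard = focal_loss(0.1, 1)   # misclassified positive: much larger loss
```

A well-classified example contributes orders of magnitude less loss than a misclassified one, so gradients concentrate on the rare, hard classes (here, severe fatty liver).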
Specifically, the ultrasonic image data is processed by the model as follows:
firstly, the image obtained in step 102 (of size H x W x C, where H is the height, W the width and C the number of channels) is input into the channel attention module of the first CBAM attention module. A vector of size C (each element representing the average activation value of one channel) is obtained by pooling each channel with global average pooling (GAP). The vector is then mapped into two vectors through a fully connected (FC) layer, representing the maximum and the average activation values of the channels respectively. Softmax activation is applied to each of the two vectors to obtain the corresponding weight coefficients, and the two weight coefficients are applied to the original feature map, i.e., multiplied with the features of the corresponding channels.
The feature map from the channel attention module is input into the spatial attention module and processed twice, by GAP in the horizontal direction and GAP in the vertical direction, yielding two vectors of sizes H x 1 x 1 and 1 x W x 1, which are mapped through two fully connected layers respectively. Softmax activation is then applied to the two vectors to obtain the corresponding weight coefficients, which represent importance in the horizontal and vertical directions. The two weight coefficients are applied separately to the original feature map, i.e., multiplied with the features of the corresponding pixels. Finally, the feature maps processed by the channel attention mechanism and the spatial attention mechanism are multiplied to obtain the final feature map, so that the network can focus on the most important features in channel and space.
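The channel and spatial weighting described above can be sketched in NumPy. This simplification replaces the learned FC layers with a single random projection and uses plain softmax weights, so it illustrates only the data flow of the two modules, not the trained attention mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(x, w_fc):
    # x: (C, H, W). GAP over the spatial dims gives one value per channel.
    v = x.mean(axis=(1, 2))              # (C,) average activation per channel
    w = softmax(w_fc @ v)                # channel weight coefficients
    return x * w[:, None, None]

def spatial_attention(x):
    # GAP in the horizontal and vertical directions, then softmax weights.
    wh = softmax(x.mean(axis=(0, 2)))    # (H,) importance per row
    ww = softmax(x.mean(axis=(0, 1)))    # (W,) importance per column
    return x * wh[None, :, None] * ww[None, None, :]

x = rng.standard_normal((8, 16, 16))     # toy (C, H, W) feature map
out = spatial_attention(channel_attention(x, rng.standard_normal((8, 8))))
```

The weighting leaves the feature-map shape unchanged; only the relative emphasis of channels, rows and columns is altered.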
The feature map obtained from the first CBAM attention module is input into residual block 1, which consists of one depth-separable convolution layer, two ordinary convolution layers and one Layer Norm layer, all with stride 1; residual block 1 is repeated 3 times. The result is then input into a downsampling layer, which consists of one Layer Norm layer and one ordinary convolution layer with a 2×2 kernel and stride 2. It is then input to residual block 2, which has the same structure as residual block 1; after repeating 3 times, the output passes through a downsampling layer into residual block 3, which has the same structure as residual block 1 and is repeated 9 times, then through a downsampling layer into residual block 4, which, after repeating 3 times, is input into the second CBAM attention module, where the same operations as in the first CBAM attention module are performed. Finally, the result is input into the classification layer, which consists of one global average pooling layer, one Layer Norm layer and one fully connected layer, and which outputs the final classification probabilities. The cosine annealing learning rate (CosineAnnealingLR) adjustment algorithm used to reduce the learning rate is as follows:
the algorithm that the learning rate starts from a larger value and then gradually decreases in the form of cosine function can automatically adjust the learning rate during training, and improves the generalization capability of the model, and the principle is as follows:wherein i indicates how many runs, < ->And->Respectively representing the maximum value and the minimum value of the learning rate, defining the range of the learning rate, ++>Then it is indicated how many epochs are currently performed, but +.>Is updated after each batch run, and one epoch is not yet executed, so +.>The value of (2) may be a fraction,/>Indicating how many epochs in total need to be trained for the ith restart. Thus, in one period, the learning rate will be reduced from +.>Reduced to +.>And then proceeds to the next cycle.
The pre-training of the constructed ResNeAt network by adopting the ImageNet data set comprises the following steps:
step 401: firstly, training the built ResNeAt network in advance by adopting an ImageNet data set, and storing a model after n training iterations; in this embodiment, n is preferably 10000.
step 402: transferring the model saved in step 401 to the training and verification sets preprocessed in step 102 for retraining and verification, so as to obtain the final ultrasound image grading model.
In step 105, the images of the training and verification sets are equally divided into five parts; each part serves in turn as the verification set, with the remaining four parts as the training set, and five-fold cross-validation is performed with the grading model obtained in step 104. The mean and variance of the five accuracies are calculated, thereby demonstrating the generalization ability of the model.
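The five-fold procedure and the accuracy statistics can be sketched in plain Python; the interleaved fold assignment and the example accuracies are illustrative choices:

```python
def five_fold_splits(items):
    """Yield (train, verification) pairs: each of five folds serves once
    as the verification set, the remaining four as the training set."""
    folds = [items[i::5] for i in range(5)]
    for k in range(5):
        val = folds[k]
        train = [x for j, fold in enumerate(folds) if j != k for x in fold]
        yield train, val

def mean_and_variance(accs):
    """Mean and (population) variance of the per-fold accuracies."""
    m = sum(accs) / len(accs)
    return m, sum((a - m) ** 2 for a in accs) / len(accs)

splits = list(five_fold_splits(list(range(10))))          # toy dataset of 10 items
m, v = mean_and_variance([0.90, 0.92, 0.91, 0.89, 0.93])  # made-up accuracies
```

A low variance across the five folds is what supports the generalization claim.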
In step 106, the test set images preprocessed in step 102 are used as input to obtain the ResNeAt network grading results;
each hierarchical evaluation index is calculated, including Accuracy (Accuracy), precision (Precision), recall (Recall), specificity (Specificity), F1-score, and whether the network meets the clinical use criteria is tested.
In step 106, the evaluation indices are calculated as follows: Accuracy = (TP + TN) / (TP + TN + FP + FN), Precision = TP / (TP + FP), Recall = TP / (TP + FN), Specificity = TN / (TN + FP), and F1-score = 2 × Precision × Recall / (Precision + Recall), where TP denotes a case predicted positive that is actually positive; TN a case predicted negative that is actually negative; FP a case predicted positive that is actually negative; and FN a case predicted negative that is actually positive.
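The five indices can be computed directly from the confusion-matrix counts; the counts in the example call are made up for illustration:

```python
def grading_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard evaluation indices from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }

m = grading_metrics(tp=8, tn=85, fp=5, fn=2)  # hypothetical counts
```

Note that with imbalanced classes (here only 10 true positives out of 100) accuracy alone is misleading, which is why recall, specificity and F1 are reported alongside it.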
Fig. 3 schematically illustrates a server 500 for grading medical ultrasound images according to one embodiment of the present disclosure. The server 500 includes an acquisition module 501, a training module 502 and a grading module 503. The acquisition module 501 is configured to acquire medical ultrasound images and RGB image datasets. The training module 502 is configured to pre-train the constructed grading model on the RGB image dataset to obtain a first grading model, and to migrate the first grading model to the medical ultrasound images for fine-tuning training to obtain a second grading model. The grading module 503 is configured to obtain a grading result based on the medical ultrasound image to be graded and the grading model. Fig. 4 schematically shows a depth-model-based medical ultrasound image grading system 600 according to one embodiment of the present disclosure.
FIG. 5 illustrates an example system 700 that includes an example computing device 710 representative of one or more systems and/or devices that can implement the various techniques described herein. Computing device 710 may be, for example, a server of a service provider, a device associated with a client (e.g., a client device), a system-on-chip, and/or any other suitable computing device or computing system. The server 500 for classifying medical images described above with respect to FIG. 3 may take the form of computing device 710. Alternatively, the server 500 for classifying medical images may be implemented as a computer program in the form of a medical ultrasound image classification application 716.
The example computing device 710 as illustrated includes a processing system 711, one or more computer-readable media 712, and one or more I/O interfaces 713 communicatively coupled to each other. Although not shown, computing device 710 may also include a system bus or other data and command transfer system that couples the various components to one another. A system bus may include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. Various other examples are also contemplated, such as control and data lines.
The processing system 711 is representative of functionality to perform one or more operations using hardware. Thus, the processing system 711 is illustrated as including hardware elements 714 that may be configured as processors, functional blocks, and the like. This may include implementation in hardware as application specific integrated circuits or other logic devices formed using one or more semiconductors. The hardware element 714 is not limited by the material from which it is formed or the processing mechanism employed therein. For example, the processor may be comprised of semiconductor(s) and/or transistors (e.g., electronic Integrated Circuits (ICs)). In such a context, the processor-executable instructions may be electronically-executable instructions.
Computer-readable medium 712 is illustrated as including memory/storage 715. Memory/storage 715 represents memory/storage capacity associated with one or more computer-readable media. Memory/storage 715 may include volatile media (such as Random Access Memory (RAM)) and/or nonvolatile media (such as Read-Only Memory (ROM), flash memory, optical disks, magnetic disks, and so forth). The memory/storage 715 may include fixed media (e.g., RAM, ROM, a fixed hard drive, etc.) and removable media (e.g., flash memory, a removable hard drive, an optical disk, and so forth). The computer-readable medium 712 may be configured in a variety of other ways as described further below.
One or more I/O interfaces 713 represent functionality that allows a user to input commands and information to computing device 710, and optionally also allows information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include keyboards, cursor control devices (e.g., mice), microphones (e.g., for voice input), scanners, touch functionality (e.g., capacitive or other sensors configured to detect physical touch), cameras (e.g., which may detect motion that does not involve touch as gestures, using visible or invisible wavelengths such as infrared frequencies), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, a haptic response device, and so forth. Thus, computing device 710 may be configured in various ways to support user interaction as further described below.
Computing device 710 also includes a medical ultrasound image classification application 716. The medical ultrasound image classification application 716 may be, for example, a software instance of the server 500 for classifying medical images described with respect to FIG. 4, and implements the techniques described herein in combination with other elements in computing device 710.
Various techniques may be described herein in the general context of software, hardware elements, or program modules. Generally, these modules include routines, programs, objects, elements, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and "component" as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.
An implementation of the described modules and techniques may be stored on or transmitted across some form of computer-readable media. Computer-readable media can include a variety of media that are accessible by computing device 710. By way of example, and not limitation, computer-readable media may comprise "computer-readable storage media" and "computer-readable signal media". "Computer-readable storage medium" refers to a medium and/or device that can persistently store information, and/or a tangible storage device, as opposed to mere signal transmission, carrier waves, or the signals themselves. Thus, computer-readable storage media refers to non-signal-bearing media. Computer-readable storage media include volatile and nonvolatile, removable and non-removable media, and/or storage devices implemented in a method or technology suitable for the storage of information such as computer-readable instructions, data structures, program modules, logic elements/circuits, or other data. Examples of a computer-readable storage medium may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture adapted to store the desired information and accessible by a computer.
"computer-readable signal medium" refers to a signal bearing medium configured to transmit instructions to hardware of computing device 710, such as via a network. Signal media may typically be embodied in a modulated data signal, such as a carrier wave, data signal, or other transport mechanism, with computer readable instructions, data structures, program modules, or other data. Signal media also include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
As previously described, hardware elements 714 and computer-readable media 712 represent instructions, modules, programmable device logic, and/or fixed device logic implemented in hardware that may be used in some embodiments to implement at least some aspects of the techniques described herein. Hardware elements may include components of an integrated circuit or system-on-chip, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), and other implementations in silicon or other hardware devices.
In this context, the hardware elements may be implemented as processing devices that perform program tasks defined by instructions, modules, and/or logic embodied by the hardware elements, as well as hardware devices that store instructions for execution, such as the previously described computer-readable storage media.
Combinations of the foregoing may also be used to implement the various techniques and modules described herein. Accordingly, software, hardware, or program modules and other program modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage medium and/or by one or more hardware elements 714. Computing device 710 may be configured to implement particular instructions and/or functions corresponding to software and/or hardware modules. Thus, modules may be implemented at least in part in hardware, for example through use of a computer-readable storage medium and/or the hardware elements 714 of the processing system, as software executable by the computing device 710. The instructions and/or functions may be executable/operable by one or more articles of manufacture (e.g., one or more computing devices 710 and/or processing systems 711) to implement the techniques, modules, and examples described herein.
In various implementations, computing device 710 may take on a variety of different configurations. For example, computing device 710 may be implemented as a computer-like device including a personal computer, desktop computer, multi-screen computer, laptop computer, netbook, and the like. Computing device 710 may also be implemented as a mobile appliance-like device including mobile devices such as mobile phones, portable music players, portable gaming devices, tablet computers, multi-screen computers, and the like. Computing device 710 may also be implemented as a television-like device that includes devices having or connected to generally larger screens in casual viewing environments. Such devices include televisions, set-top boxes, gaming machines, and the like.
The techniques described herein may be supported by these various configurations of computing device 710 and are not limited to the specific examples of techniques described herein. The functionality may also be implemented in whole or in part on the "cloud" 720 through the use of a distributed system, such as through platform 722 as described below.
Cloud 720 includes and/or is representative of platform 722 for resource 724. Platform 722 abstracts underlying functionality of hardware (e.g., servers) and software resources of cloud 720. The resources 724 may include applications and/or data that may be used when executing computer processing on servers remote from the computing device 710. The resources 724 may also include services provided over the internet and/or over subscriber networks such as cellular or Wi-Fi networks.
Platform 722 may abstract resources and functionality to connect computing device 710 with other computing devices. Platform 722 may also serve to abstract the scaling of resources, providing a corresponding level of scale to meet the demand encountered for the resources 724 implemented via platform 722. Thus, in an interconnected-device embodiment, implementation of the functionality described herein may be distributed throughout system 700.
For example, the functionality may be implemented in part on computing device 710 and by platform 722 abstracting the functionality of cloud 720.
It should be understood that for clarity, embodiments of the present disclosure have been described with reference to different functional modules. However, it will be apparent that the functionality of each functional module may be implemented in a single module, in a plurality of modules, or as part of other functional modules without departing from the present disclosure. For example, functionality illustrated to be performed by a single module may be performed by multiple different modules. Thus, references to specific functional blocks are only to be seen as references to suitable blocks for providing the described functionality rather than indicative of a strict logical or physical structure or organization. Thus, the present disclosure may be implemented in a single module or may be physically and functionally distributed between different modules and circuits.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various devices, elements, or components, these devices, elements, or components should not be limited by these terms. These terms are only used to distinguish one device, element, or component from another device, element, or component.
Although the present disclosure has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present disclosure is limited only by the appended claims. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. The order of features in the claims does not imply any specific order in which the features must be worked. Furthermore, in the claims, the word "comprising" does not exclude other elements, and the indefinite article "a" or "an" does not exclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.

Claims (10)

1. A medical ultrasound image grading method, comprising the steps of:
acquiring a medical ultrasound image and an RGB image dataset;
pre-training the constructed grading model based on the RGB image data set to obtain a first grading model, and transferring the first grading model to a medical ultrasonic image for fine tuning training to obtain a second grading model;
the construction process of the grading model comprises the following steps: introducing residual blocks and an attention mechanism on the basis of a ResNeAt network structure; adding a channel attention module and a spatial attention module before the first residual block and after the last residual block; obtaining a first feature map through the channel attention module; passing the first feature map through the spatial attention module, which extracts features in the horizontal and vertical directions, to obtain a second feature map; weighting the first feature map and the second feature map; and outputting a classification result through a classification layer;
and obtaining a grading result based on the medical ultrasonic image to be graded and the grading model.
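As a rough illustration of the attention pipeline in claim 1, the framework-free sketch below gates each channel by a sigmoid of its global mean (channel attention), gates each spatial position by a sigmoid of its row and column means (feature extraction in the horizontal and vertical directions), and fuses the two outputs with a fixed weight. The specific gating functions and the weight `alpha` are illustrative assumptions, not the patent's exact modules:

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap):
    # fmap: C x H x W nested lists; scale each channel by a sigmoid
    # gate computed from its global average (squeeze-style gating)
    out = []
    for ch in fmap:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        g = _sigmoid(mean)
        out.append([[v * g for v in row] for row in ch])
    return out

def spatial_attention(fmap):
    # gate each position by a sigmoid of its row mean plus column mean,
    # i.e., pooled features along the horizontal and vertical directions
    out = []
    for ch in fmap:
        h, w = len(ch), len(ch[0])
        row_mean = [sum(row) / w for row in ch]
        col_mean = [sum(ch[i][j] for i in range(h)) / h for j in range(w)]
        out.append([[ch[i][j] * _sigmoid(row_mean[i] + col_mean[j])
                     for j in range(w)] for i in range(h)])
    return out

def fuse(fmap, alpha=0.5):
    # weighted combination of the channel- and spatial-attention outputs
    f1, f2 = channel_attention(fmap), spatial_attention(fmap)
    return [[[alpha * f1[c][i][j] + (1 - alpha) * f2[c][i][j]
              for j in range(len(fmap[c][0]))]
             for i in range(len(fmap[c]))]
            for c in range(len(fmap))]
```

In a real network these gates would be learned (e.g., small convolutions over the pooled statistics); the fixed sigmoid-of-mean here only shows the data flow.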
2. A medical ultrasound image grading method according to claim 1, wherein the medical ultrasound image is preprocessed after acquisition, the preprocessing including black-frame removal, image graying, local adaptive histogram equalization, image enhancement, and data division.
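Two of the preprocessing steps in claim 2 can be sketched in plain Python. For brevity, global histogram equalization stands in for the claimed local adaptive (CLAHE-style) variant, which additionally operates on tiles and clips the histogram; the luma weights are the common ITU-R BT.601 values, an assumption rather than the patent's choice:

```python
def to_gray(rgb_img):
    # rgb_img: H x W list of (r, g, b) tuples in 0-255; BT.601 luma weights
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_img]

def equalize(gray_img, levels=256):
    # global histogram equalization: remap each intensity through the
    # normalized cumulative histogram to spread the dynamic range
    h, w = len(gray_img), len(gray_img[0])
    hist = [0] * levels
    for row in gray_img:
        for v in row:
            hist[v] += 1
    cdf, run = [], 0
    for c in hist:
        run += c
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    scale = (levels - 1) / max(h * w - cdf_min, 1)
    return [[round((cdf[v] - cdf_min) * scale) for v in row] for row in gray_img]
```

A production pipeline would typically use a library routine (e.g., an OpenCV-style CLAHE) rather than this per-pixel loop.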
3. A medical ultrasound image grading method according to claim 1, wherein the medical ultrasound images are divided into a training set, a validation set and a test set; the stored first grading model is migrated to the training set and the validation set for retraining and validation to obtain the second grading model; and the second grading model is tested on the test set until it meets the standard for use.
4. A medical ultrasound image grading method according to claim 1, wherein the learning rate is adjusted using a cosine annealing learning rate adjustment algorithm when pre-training the grading model.
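The cosine annealing schedule named in claim 4 follows a standard closed form: the learning rate decays from a maximum to a minimum over the training horizon along half a cosine period. The bounds `lr_max` and `lr_min` below are illustrative values, not taken from the patent:

```python
import math

def cosine_annealing_lr(epoch, total_epochs, lr_max=1e-3, lr_min=1e-6):
    # lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))
    # starts at lr_max (t = 0) and ends at lr_min (t = T)
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs))
```

In a framework such as PyTorch this corresponds to a built-in scheduler (e.g., `CosineAnnealingLR`); the formula above is the same curve computed by hand.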
5. A medical ultrasound image grading method according to claim 1, wherein after the second grading model is obtained, the second grading model is cross-validated, and the average and variance of the accuracy obtained by the cross-validation are calculated.
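The cross-validation statistics of claim 5 reduce to a fold split plus the mean and variance of per-fold accuracies. The contiguous fold assignment below is one simple choice among several (stratified or shuffled splits are common alternatives):

```python
def kfold_indices(n, k):
    # split n sample indices into k contiguous folds of near-equal size
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def mean_and_variance(accuracies):
    # population mean and variance of the per-fold accuracies
    m = sum(accuracies) / len(accuracies)
    var = sum((a - m) ** 2 for a in accuracies) / len(accuracies)
    return m, var
```

Each fold would serve once as the held-out set while the model trains on the rest; the returned mean and variance summarize how stable the grading accuracy is across folds.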
6. A medical ultrasound image grading method according to claim 1, wherein the ResNeAt network comprises four groups of residual blocks in different numbers, three downsampling layers, an attention module and a classification layer; each residual block consists of a depthwise separable convolution layer, two ordinary convolution layers and a Layer Norm layer; each downsampling layer consists of a Layer Norm layer and an ordinary convolution layer; and the classification layer consists of a global average pooling layer, a Layer Norm layer and a fully connected layer.
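A quick parameter count shows why claim 6's residual blocks use a depthwise separable convolution rather than a standard one: factoring a k×k convolution into a per-channel depthwise pass plus a 1×1 pointwise pass cuts the weight count sharply. The channel count (96) and kernel size (7×7) in the test are illustrative assumptions, not values from the patent:

```python
def standard_conv_params(c_in, c_out, k):
    # weights of a standard k x k convolution (biases ignored)
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise k x k conv (one filter per input channel)
    # followed by a 1 x 1 pointwise conv mixing channels
    return c_in * k * k + c_in * c_out
```

For example, with 96 input and output channels and a 7×7 kernel, the standard convolution needs 96·96·49 = 451,584 weights, while the separable form needs only 96·49 + 96·96 = 13,920.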
7. A server, comprising:
an acquisition module configured to acquire a medical ultrasound image and an RGB image dataset;
the training module is configured to pretrain the constructed grading model based on the RGB image data set to obtain a first grading model, and migrate the first grading model to the medical ultrasonic image for fine tuning training to obtain a second grading model;
the construction process of the grading model comprises the following steps: introducing residual blocks and an attention mechanism on the basis of a ResNeAt network structure; adding a channel attention module and a spatial attention module before the first residual block and after the last residual block; obtaining a first feature map through the channel attention module; passing the first feature map through the spatial attention module, which extracts features in the horizontal and vertical directions, to obtain a second feature map; weighting the first feature map and the second feature map; and outputting a classification result through a classification layer;
and the grading module is configured to obtain grading results based on the medical ultrasonic image to be graded and the grading model.
8. A medical ultrasound image grading system, comprising an image acquisition device and an image processing device;
the image acquisition device is used for acquiring medical ultrasound images and RGB image datasets and sending the medical ultrasound images and the RGB image datasets to the image processing device;
the image processing apparatus is configured to perform the following operations:
pre-training the constructed grading model based on the RGB image data set to obtain a first grading model, and transferring the first grading model to a medical ultrasonic image for fine tuning training to obtain a second grading model;
the construction process of the grading model comprises the following steps: introducing residual blocks and an attention mechanism on the basis of a ResNeAt network structure; adding a channel attention module and a spatial attention module before the first residual block and after the last residual block; obtaining a first feature map through the channel attention module; passing the first feature map through the spatial attention module, which extracts features in the horizontal and vertical directions, to obtain a second feature map; weighting the first feature map and the second feature map; and outputting a classification result through a classification layer;
and obtaining a grading result based on the medical ultrasonic image to be graded and the grading model.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of a medical ultrasound image grading method according to any of claims 1-6.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of a medical ultrasound image grading method according to any of claims 1-6.
CN202311674642.5A 2023-12-08 2023-12-08 Medical ultrasonic image grading method, system, server, medium and device Active CN117392124B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311674642.5A CN117392124B (en) 2023-12-08 2023-12-08 Medical ultrasonic image grading method, system, server, medium and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311674642.5A CN117392124B (en) 2023-12-08 2023-12-08 Medical ultrasonic image grading method, system, server, medium and device

Publications (2)

Publication Number Publication Date
CN117392124A true CN117392124A (en) 2024-01-12
CN117392124B CN117392124B (en) 2024-02-13

Family

ID=89470510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311674642.5A Active CN117392124B (en) 2023-12-08 2023-12-08 Medical ultrasonic image grading method, system, server, medium and device

Country Status (1)

Country Link
CN (1) CN117392124B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381164A (en) * 2020-11-20 2021-02-19 北京航空航天大学杭州创新研究院 Ultrasound image classification method and device based on multi-branch attention mechanism
CN112700434A (en) * 2021-01-12 2021-04-23 苏州斯玛维科技有限公司 Medical image classification method and classification device thereof
CN114298234A (en) * 2021-12-31 2022-04-08 深圳市铱硙医疗科技有限公司 Brain medical image classification method and device, computer equipment and storage medium
WO2023151199A1 (en) * 2022-02-10 2023-08-17 华中科技大学同济医学院附属协和医院 Method for constructing cross-attention mechanism-based fracture image fine recognition network
CN117115452A (en) * 2023-09-12 2023-11-24 澳门理工大学 Controllable medical ultrasonic image denoising method, system and computer storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BAI Zhi'an; ZHU Tiebing: "Application practice of image-cloud-based intelligent auxiliary diagnosis in hierarchical diagnosis and treatment", China Digital Medicine, no. 07, pages 114 - 116 *

Also Published As

Publication number Publication date
CN117392124B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
Tayal et al. DL-CNN-based approach with image processing techniques for diagnosis of retinal diseases
CN110599476B (en) Disease grading method, device, equipment and medium based on machine learning
Qiu et al. An initial investigation on developing a new method to predict short-term breast cancer risk based on deep learning technology
CN110459319B (en) Auxiliary diagnosis system of mammary gland molybdenum target image based on artificial intelligence
Gatos et al. Temporal stability assessment in shear wave elasticity images validated by deep learning neural network for chronic liver disease fibrosis stage assessment
US20220215548A1 (en) Method and device for identifying abnormal cell in to-be-detected sample, and storage medium
Ayas Multiclass skin lesion classification in dermoscopic images using swin transformer model
CN113939844A (en) Computer-aided diagnosis system for detecting tissue lesions on microscopic images based on multi-resolution feature fusion
Seo et al. A deep learning algorithm for automated measurement of vertebral body compression from X-ray images
Byra et al. Impact of ultrasound image reconstruction method on breast lesion classification with deep learning
Snider et al. An image classification deep-learning algorithm for shrapnel detection from ultrasound images
CN110555856A (en) Macular edema lesion area segmentation method based on deep neural network
Chen et al. General deep learning model for detecting diabetic retinopathy
CN112967778B (en) Accurate medicine application method and system for inflammatory bowel disease based on machine learning
US20210145389A1 (en) Standardizing breast density assessments
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
Keller et al. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices
Shamrat et al. Analysing most efficient deep learning model to detect COVID-19 from computer tomography images
CN113705595A (en) Method, device and storage medium for predicting degree of abnormal cell metastasis
Malik et al. Multi-modal deep learning methods for classification of chest diseases using different medical imaging and cough sounds
CN117392124B (en) Medical ultrasonic image grading method, system, server, medium and device
WO2023280221A1 (en) Multi-scale 3d convolutional classification model for cross-sectional volumetric image recognition
CN108346471B (en) Pathological data analysis method and device
KR102472886B1 (en) Method for providing information on diagnosing renal failure and device using the same
Yang et al. Tumor detection from breast ultrasound images using mammary gland attentive u-net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant