CN116934683A - Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence - Google Patents

Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence

Info

Publication number
CN116934683A
CN116934683A (application CN202310620325.9A)
Authority
CN
China
Prior art keywords
spleen
segmentation
wound
ultrasonic
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310620325.9A
Other languages
Chinese (zh)
Inventor
蒋雪
宋斌
蒋波
罗广鑫
叶庆桂
宋文静
王琼
陈媛媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anti Terrorism And Special Police Corps Of Beijing Municipal Public Security Bureau
Fourth Medical Center General Hospital of Chinese PLA
Original Assignee
Anti Terrorism And Special Police Corps Of Beijing Municipal Public Security Bureau
Fourth Medical Center General Hospital of Chinese PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anti Terrorism And Special Police Corps Of Beijing Municipal Public Security Bureau and Fourth Medical Center General Hospital of Chinese PLA
Priority to CN202310620325.9A
Publication of CN116934683A
Legal status: Pending


Classifications

    • A61B 8/0833: Ultrasonic diagnosis, detecting or locating foreign bodies or organic structures
    • A61B 8/085: Ultrasonic diagnosis, locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/52: Devices using data or image processing specially adapted for ultrasonic diagnosis
    • G06N 3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/12: Edge-based segmentation
    • G06V 10/764: Image or video recognition using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Image or video recognition using neural networks
    • G16H 50/20: ICT specially adapted for computer-aided medical diagnosis
    • G06T 2207/10016: Image acquisition modality, video; image sequence
    • G06T 2207/10132: Image acquisition modality, ultrasound image
    • G06T 2207/20081: Special algorithmic details, training; learning
    • G06T 2207/20084: Special algorithmic details, artificial neural networks [ANN]
    • G06V 2201/031: Recognition of patterns in medical or anatomical images of internal organs
    • Y02A 90/10: ICT supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Public Health (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Vascular Medicine (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to a method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds and belongs to the field of artificial intelligence. According to the invention, a medical imaging device is connected to a portable device to acquire ultrasound images of the spleen region of a spleen-trauma patient; the ultrasound video stream is sampled and frame-extracted to generate a video image frame sequence; the spleen wound segmentation model predicts on the frame sequence to obtain a corresponding segmentation prediction result sequence; the spleen wound grade classification model then predicts the spleen wound grade classification label corresponding to the ultrasound images; finally, the corresponding visualized segmentation and classification result is output. The invention improves segmentation precision and accuracy, greatly reduces the storage footprint of the dataset, and makes full use of the correlation of information between images to obtain sufficient context information, improving the accuracy of wound-grade classification.

Description

Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds.
Background
In trauma rescue work, saving lives is the first priority, and the demands on timeliness, accuracy and professionalism are high. Transferring the wounded within the 'golden hour' and rapidly starting medical intervention are key guarantees for saving them. However, if an attack occurs in a crowded part of a large or medium-sized city, traffic congestion becomes an unavoidable obstacle to emergency rescue: once professional medical personnel, vehicles and equipment are stuck on the way, the wounded are hard to triage and slow to transport, which directly affects the timeliness of rescue. If the incident occurs in a remote area far from a city, delivering medical care in time is equally difficult. At the same time, the proficiency of on-site medical staff with ultrasound instruments may also affect diagnostic accuracy. Rapid, accurate and simple injury assessment of the wounded is therefore essential to saving lives to the greatest extent.
Abdominal trauma is a common injury. In some patients the early symptoms of closed abdominal injury are not obvious, yet they are often accompanied by serious visceral damage, so timely and accurate diagnosis is needed. The spleen has fragile tissue and an abundant blood supply and is the most easily injured abdominal organ; splenic trauma accounts for 13-25% of abdominal injuries, and if diagnosis and treatment are delayed the patient's life may be endangered. In the past, surgical procedures such as diagnostic peritoneal lavage and exploratory laparotomy were commonly used to diagnose abdominal injuries. However, these tend to cause unnecessary trauma to patients who are hemodynamically stable and increase the risk of complications such as infection. At present, preliminary imaging screening has become the main way of rapidly assessing injury. The principal imaging examinations are CT and ultrasound. CT has a high diagnostic coincidence rate and is not affected by gas interference or patient respiration during the examination. However, CT scanners cannot be moved at will, so pre-hospital and emergency bedside patients cannot be scanned; in addition, CT is radioactive, increasing the radiation-exposure risk for children and pregnant women, which limits the examination, and depending on the scan slice thickness some tiny lesions may be missed. CT scanning therefore has limitations. Ultrasound devices are easy to operate, portable and undemanding to use, and can be applied to rapid injury assessment both pre-hospital and in the emergency room. Ultrasound can determine whether a viscus is injured by observing the continuity of the splenic capsule, the presence of a subcapsular hematoma, and whether the parenchymal echo is uniform. It can also infer organ damage from indirect signs such as retroperitoneal hematoma and peritoneal effusion, and color Doppler ultrasound can reveal abnormalities of blood supply within a lesion. However, conventional ultrasound is only about 41% sensitive in diagnosing solid-organ injury. Contrast-enhanced ultrasound (CEUS) can detect parenchymal injury and active hemorrhage among multiple abdominal injuries, improves the accuracy of ultrasonic examination of splenic injury, provides a more reliable means of evaluating solid-organ damage, and compensates for the shortcomings of conventional ultrasound. Studies have shown that CEUS is as accurate as CT in detecting and staging traumatic splenic injury. However, ultrasound contrast is an invasive examination and contrast agents are expensive, so this examination too has limitations.
Artificial intelligence, as a strategic technology leading the future, is increasingly applied across all fields of society. In 2017 China issued documents such as the 'New Generation Artificial Intelligence Development Plan' and the 'Three-Year Action Plan for Promoting the Development of the New Generation Artificial Intelligence Industry (2018-2020)' to promote the development and industrialization of AI technology. In 2016 the world's first artificial-intelligence ultrasound product series was released, and AI began to be applied to the diagnosis of thyroid nodules, where it can improve diagnostic accuracy. Deep learning, one of the most active branches of machine learning, performs representation learning on massive data by constructing multi-layer artificial neural networks; the rapid growth of graphics processing power has enabled state-of-the-art algorithms with more stable and powerful image-analysis capability. Deep learning is therefore now widely applied to diagnosis and recognition in ultrasound images of many organs, with high accuracy. At present there are few reports on artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds. This work applies deep learning to establish an artificial-intelligence spleen wound segmentation and classification model, realizing a lightweight spleen wound ultrasound image segmentation and classification system that produces segmentation and classification predictions from ultrasound images.
Maorii He et al., in 'Ultrasonic Image Diagnosis of Liver and Spleen Injury Based on a Double-Channel Convolutional Neural Network', propose a spleen-injury ultrasound image classification and diagnosis algorithm based on a dual-channel convolutional neural network. First, an anisotropic-diffusion denoising model is used to preprocess the spleen ultrasound images, improving image quality. Second, the lesion position is detected to acquire external edge features, and the rotation-invariant local binary pattern features extracted from the image are taken as its internal texture features. Finally, the dual-channel convolutional neural network uses the texture information and the edge information of the damaged image as its two input channels to classify and recognize the spleen-injury ultrasound image.
Step 1: image preprocessing. Owing to limitations of the medical ultrasound imaging mechanism, spleen ultrasound images suffer serious noise interference, mainly additive thermal noise and multiplicative speckle noise, so the images must be denoised first.
Step 2: texture-feature extraction. A shallow texture feature of the input image is obtained by computing the histogram of local binary pattern values; deep features are obtained by feeding the spleen ultrasound image to be processed into GoogLeNet. The deep and shallow features are each normalized and then fused, yielding the internal texture features.
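A minimal sketch of the shallow branch of this step, assuming scikit-image's rotation-invariant uniform LBP; the GoogLeNet deep-feature branch and the fusion weights are omitted, and the P/R parameter values are illustrative:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Shallow texture feature: histogram of uniform LBP codes
    (parameter values are illustrative assumptions)."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    bins = points + 2  # the "uniform" method yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
    return hist  # normalized, ready to fuse with normalized deep features
```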
Step 3: edge-information extraction. An improved Canny edge-detection method detects contour edges in the horizontal, vertical, 45° and 135° directions and extracts the external edge features.
Step 4: classification and recognition of spleen-injury ultrasound images. In clinical diagnosis, the outer edge information and the inner texture information of the spleen are often important bases for a physician's judgment. The external edge features and the internal texture features are therefore taken as the two input channels of the convolutional neural network, and, combined with a dynamic K-max pooling method, a spleen-injury ultrasound image classification algorithm based on a dual-channel CNN is proposed.
The network consists of two relatively independent branches, each composed of an input layer, convolution layers, pooling layers and fully connected layers. At the input layer, the external edge features and the internal texture features serve as the inputs of the two channels. In the first convolution layer the kernel size is set separately according to the feature type, so the two branches typically use convolution kernels of different sizes to process their inputs. After the pooling layers, each channel is connected to a fully connected layer for fully connected mapping; the two fully connected layers are then merged by another fully connected layer, and the extracted features are finally classified with a Softmax classifier.
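A minimal sketch, assuming PyTorch, of how such a dual-channel network could be wired; the layer widths, kernel sizes and the two-class output are illustrative assumptions, and dynamic K-max pooling is approximated by adaptive max pooling:

```python
import torch
import torch.nn as nn

class DualChannelCNN(nn.Module):
    """Two independent branches (edge / texture) merged by fully connected
    layers; all sizes here are illustrative, not from the cited paper."""
    def __init__(self, num_classes=2):
        super().__init__()
        def branch(kernel):  # each channel gets its own convolution-kernel size
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel, padding=kernel // 2), nn.ReLU(),
                nn.AdaptiveMaxPool2d(8),  # stand-in for dynamic K-max pooling
                nn.Flatten(),
                nn.Linear(16 * 8 * 8, 64), nn.ReLU())
        self.edge_branch = branch(3)
        self.texture_branch = branch(5)
        self.head = nn.Linear(64 + 64, num_classes)  # merging FC layer

    def forward(self, edge_map, texture_map):
        fused = torch.cat([self.edge_branch(edge_map),
                           self.texture_branch(texture_map)], dim=1)
        return self.head(fused)  # Softmax is applied by the loss at train time
```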
The prior art has several problems that make it unsuitable for the scenario targeted by the present invention:
1. Its output is a per-grade classification of the wound obtained by analyzing only a single image, so the image showing the most severely wounded region may never be captured and the classification may be wrong. Because multiple images are not analyzed jointly, the correlation of information between images is ignored and sufficient context information is not obtained, leading to insufficient feature extraction and inaccurate classification.
2. In the targeted scenario, the spleen and the location of the trauma must be precisely localized; a grade classification result alone provides insufficient anatomical and pathological information, so physicians must expend further effort on screening and analysis.
3. Using two feature-extraction results as the inputs of a dual convolutional neural network is time-consuming and performs poorly:
the dual feature extraction and the dual convolutional network give the scheme high network complexity and reduced runtime performance. When processing an ultrasound video stream the data volume is large, so this prior-art scheme responds slowly and cannot keep up with the required processing.
Disclosure of Invention
(I) Technical problem to be solved
The technical problem to be solved by the invention is that the prior art suffers from high network complexity, reduced runtime performance, large data volumes, and a response speed too low to process an ultrasound video stream in time; to this end, the invention provides a method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds.
(II) Technical scheme
In order to solve the above technical problem, the invention provides a method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds, which comprises the following steps:
S1, connecting a medical imaging device to a portable device and acquiring ultrasound images of the spleen region of the wounded;
S2, sampling and frame-extracting the ultrasound video stream to generate a video image frame sequence;
S3, importing the video image frame sequence generated from each ultrasound video into a spleen wound ultrasound image segmentation and classification system, and predicting on the frame sequence with a spleen wound segmentation model to obtain a corresponding segmentation prediction result sequence, in which the segmentation prediction result corresponding to each video image contains spleen contour and/or wound contour position information;
S4, importing the segmentation prediction result sequence generated in step S3 into the spleen wound ultrasound image segmentation and classification system, and predicting with a spleen wound grade classification model to obtain the spleen wound grade classification label corresponding to the ultrasound images;
S5, outputting the corresponding visualized segmentation and classification result from the spleen wound ultrasound image segmentation and classification system.
Further, before step S1 the method also comprises: S0, training the spleen wound segmentation model and the spleen wound grade classification model of the spleen wound ultrasound image segmentation and classification system.
Further, step S0 specifically comprises:
S0.1, the training ultrasound material comprises spleen-wound and normal-spleen ultrasound videos; each training video is frame-extracted to generate a group of training image frame sequences, each matched with a group of manually annotated segmentation image frame sequences and the corresponding spleen wound grade classification label, which respectively form the original image dataset, the segmentation label dataset and the classification label dataset; the manually annotated segmentation image frame sequences contain spleen contour and/or wound contour position information;
S0.2, training the spleen wound segmentation model on the original image dataset and the segmentation label dataset obtained in step S0.1;
S0.3, training the spleen wound grade classification model on the segmentation label dataset and the classification label dataset obtained in step S0.1.
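A minimal sketch, assuming PyTorch, of how the three datasets of S0.1 could be paired for training; the field names and in-memory layout are assumptions, not details from the patent:

```python
import torch
from torch.utils.data import Dataset

class SpleenUltrasoundDataset(Dataset):
    """Pairs each training frame sequence with its manually annotated
    segmentation sequence and grade label (shapes are assumptions)."""
    def __init__(self, frame_seqs, seg_seqs, grade_labels):
        self.frame_seqs = frame_seqs      # original image dataset
        self.seg_seqs = seg_seqs          # segmentation label dataset
        self.grade_labels = grade_labels  # classification label dataset

    def __len__(self):
        return len(self.frame_seqs)

    def __getitem__(self, i):
        frames = torch.as_tensor(self.frame_seqs[i]).unsqueeze(0).float()
        masks = torch.as_tensor(self.seg_seqs[i]).long()
        return frames, masks, int(self.grade_labels[i])
```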
Further, the spleen wound segmentation model and the spleen wound grade classification model trained in step S0 are each converted into model scripts by the TorchScript method; the spleen wound ultrasound image segmentation and classification system loads the scripts as its prediction tool and outputs visualized prediction results in a visual interface.
Further, in step S2, OpenCV video processing is used to uniformly extract multiple image frames from the corresponding video, and data preprocessing is performed to remove irrelevant information.
Further, in step S2 a frame-extraction scheme based on time interval, frame interval or inter-frame difference strength is adopted.
Further, in step S3 the spleen wound segmentation model adopts a residual network structure combined with a U-Net architecture, with the specific network structure designed as follows:
S3.1, the input video image frame sequence is brought, by an image-scaling method and a three-dimensional convolution layer, to the same size and dimensions as the feature maps of the corresponding decoding stage;
S3.2, the data then pass sequentially through a three-dimensional convolution layer, a BatchNorm normalization layer, a ReLU activation function and a max-pooling layer before the supervision and loss computation of the encoding stage (encoder); encoding comprises four stages, containing 3, 4, 6 and 3 coding network structures respectively; the input of each stage is the downsampled output of the previous stage, and the output of each stage is stored;
S3.3, the supervision and loss computation of the decoding stage (decoder) is then performed over four decoding stages: the first takes the output of the fourth encoding stage as input, while the second, third and fourth take the upsampled output of the previous decoding stage together with the outputs of the third, second and first encoding stages respectively; each decoding stage comprises a convolution layer, a BatchNorm normalization layer and a ReLU activation function, and after the fourth stage outputs the per-channel feature maps the final result is convolved to obtain the segmentation prediction result sequence.
Further, the coding network structure is a three-dimensional ResNet residual network structure, a combined two- and three-dimensional ResNet residual network structure, a three-dimensional CNN network structure or an LSTM network structure.
Further, in step S4 the network structure of the spleen wound grade classification model comprises a first three-dimensional ResNet residual network structure, a second three-dimensional ResNet residual network structure and a Softmax classifier: the segmentation prediction result sequence is input to the first ResNet structure, the downsampled encoding of the first structure is input to the second, the downsampled encoding of the second is input to the Softmax classifier, and the Softmax classifier classifies the extracted features to obtain the spleen wound grade classification label corresponding to the input segmentation prediction result sequence.
Further, the ResNet residual network structure comprises one or more residual modules, with a shortcut connection added between the input and the output of each module; that is, the output of a residual module is the element-wise sum of the feature F(x), learned through the weight matrices, and the input x.
(III) Beneficial effects
The invention provides a method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds. Model parameters are trained on spleen-wound and normal-spleen ultrasound videos, giving the method the capability to analyze ultrasound video streams; the original image sequence generated from an analyzed video stream helps to complement the features of any single image, so the model fully understands the context and segmentation precision and accuracy improve. Meanwhile, the frame-extraction process exploits the information in the video to the greatest extent while greatly reducing the storage footprint of the dataset. On this basis, predictive segmentation of single spleen-lesion images is also supported.
The spleen wound segmentation model for ultrasonic diagnosis combines a residual network structure with a U-Net framework, gaining the advantages of both network structures and improving the accuracy of lesion segmentation.
In step S4 of the invention, multiple spleen-wound ultrasound segmentation results are combined on the basis of the spleen wound grade classification model for comprehensive judgment and classification; the correlation of information between images is fully exploited, sufficient context information is obtained, and the accuracy of wound-grade classification improves.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of data preprocessing;
FIG. 3 is a network structure diagram of a spleen wound segmentation model for ultrasonic diagnosis;
FIG. 4 is a diagram of a spleen wound grade classification network in accordance with an embodiment;
FIG. 5 is a schematic diagram of a spleen wound ultrasound image segmentation classification system;
FIG. 6 is a schematic diagram of a residual structure;
FIG. 7 is a schematic diagram of a U-Net network architecture.
Detailed Description
To make the objects, contents and advantages of the invention clearer, the invention is described in detail below with reference to the accompanying drawings and embodiments.
The invention aims to provide a deep-learning-based method for diagnosing wounds in spleen ultrasound images, which analyzes the video generated during ultrasonic diagnosis and identifies the splenic anatomical contour and the spleen-wound location appearing in the image sequence obtained by splitting the video, thereby completing the classification of the spleen wound grade and realizing artificial-intelligence ultrasonic diagnosis of spleen wounds. The invention also provides a lightweight spleen wound ultrasound image segmentation and classification system that can run directly on a portable device, is suited to the pre-hospital emergency environment, and outputs a visualized spleen segmentation result and the wound classification result.
The deep-learning-based spleen-ultrasound trauma diagnosis method of the invention comprises the following steps:
S1, connecting a medical imaging device to a portable device and acquiring ultrasound images of the spleen region of a spleen-trauma patient;
S2, sampling and frame-extracting the ultrasound video stream to generate a video image frame sequence;
S3, importing the video image frame sequence generated from each ultrasound video into a spleen wound ultrasound image segmentation and classification system, and predicting on the frame sequence with a spleen wound segmentation model to obtain a corresponding segmentation prediction result sequence, in which the segmentation prediction result corresponding to each video image contains spleen contour and/or wound contour position information;
S4, importing the segmentation prediction result sequence generated in step S3 into the spleen wound ultrasound image segmentation and classification system, and predicting with a spleen wound grade classification model to obtain the spleen wound grade classification label corresponding to the ultrasound images;
S5, outputting the corresponding visualized segmentation and classification result from the spleen wound ultrasound image segmentation and classification system.
Example 1:
As shown in FIG. 1, the method of the invention for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds comprises the following steps:
S1, connecting the medical imaging device to the portable device and acquiring spleen ultrasound images of the wounded.
S2, extracting image frames from the video by time interval or frame interval: the ultrasound video stream is sampled and frame-extracted, multiple image frames are uniformly extracted from the corresponding video, optionally using a video-processing library such as OpenCV, and data preprocessing is performed at the same time to remove irrelevant information and generate the video image frame sequence, as shown in FIG. 2.
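As an illustration of this step, a minimal sketch using OpenCV's VideoCapture API; the frame count and target size are assumptions, and the preprocessing shown (grayscale conversion and resizing) stands in for the removal of irrelevant information:

```python
import cv2

def sample_frames(video_path, num_frames=16, size=(224, 224)):
    """Uniformly sample num_frames frames from an ultrasound clip
    (num_frames and size are illustrative assumptions)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / num_frames))
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # drop color channels
        frames.append(cv2.resize(gray, size))           # normalize frame size
    cap.release()
    return frames  # the video image frame sequence fed to the models
```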
S3, importing the video image frame sequence generated in step S2 for each ultrasound video into the spleen wound ultrasound image segmentation and classification system, and predicting on the frame sequence with the spleen wound segmentation model to obtain a corresponding segmentation prediction result sequence, in which the segmentation prediction result corresponding to each video image contains position information such as the spleen contour and/or the wound contour.
The spleen wound segmentation model for ultrasonic diagnosis adopts a residual network structure combined with a U-Net architecture, with the specific network structure designed as follows:
S3.1, the input video image frame sequence is enlarged or reduced by an image-scaling method, preferably bilinear interpolation, and then brought by a three-dimensional convolution layer to the same size and dimensions as the feature maps of the corresponding decoding stage.
S3.2, the data then pass sequentially through a three-dimensional convolution layer, a BatchNorm normalization layer, a ReLU activation function and a max-pooling layer before the supervision and loss computation of the encoding stage (encoder); encoding comprises four stages, containing 3, 4, 6 and 3 three-dimensional ResNet residual network structures respectively, each ResNet using residual learning to overcome the degradation problem; the input of each stage is the downsampled output of the previous stage, and the output of each stage is stored.
S3.3, the supervision and loss computation of the decoding stage (decoder) is then performed over four decoding stages: the first takes the output of the fourth encoding stage as input, while the second, third and fourth take the upsampled output of the previous decoding stage together with the outputs of the third, second and first encoding stages respectively; each decoding stage comprises a convolution layer, a BatchNorm normalization layer, a ReLU activation function and the like, and after the fourth stage outputs the per-channel feature maps the final result is convolved to obtain the segmentation prediction result sequence, as shown in FIG. 3.
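A minimal sketch, assuming PyTorch, of the residual U-Net wiring described in S3.1-S3.3: a 3-4-6-3 three-dimensional residual encoder whose stage outputs are stored and reused as skip connections by a four-stage decoder, with trilinear interpolation standing in for the per-frame scaling of S3.1. Channel widths and the two-class output are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock3d(nn.Module):
    """3D residual block: output = ReLU(F(x) + shortcut(x))."""
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(cin, cout, 3, stride, 1), nn.BatchNorm3d(cout), nn.ReLU(),
            nn.Conv3d(cout, cout, 3, 1, 1), nn.BatchNorm3d(cout))
        self.shortcut = (nn.Identity() if stride == 1 and cin == cout else
                         nn.Sequential(nn.Conv3d(cin, cout, 1, stride),
                                       nn.BatchNorm3d(cout)))

    def forward(self, x):
        return F.relu(self.body(x) + self.shortcut(x))

def enc_stage(cin, cout, blocks):
    """Encoding stage: downsample on entry, then (blocks - 1) residual blocks."""
    return nn.Sequential(ResBlock3d(cin, cout, stride=2),
                         *[ResBlock3d(cout, cout) for _ in range(blocks - 1)])

class SpleenSegNet(nn.Module):
    def __init__(self, classes=2):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv3d(1, 32, 3, 1, 1),   # S3.2 entry
                                  nn.BatchNorm3d(32), nn.ReLU(),
                                  nn.MaxPool3d(2))
        # four encoding stages with 3, 4, 6 and 3 residual structures
        self.e1, self.e2 = enc_stage(32, 64, 3), enc_stage(64, 128, 4)
        self.e3, self.e4 = enc_stage(128, 256, 6), enc_stage(256, 512, 3)
        def dec(cin, cout):  # decoding stage: conv + BatchNorm + ReLU
            return nn.Sequential(nn.Conv3d(cin, cout, 3, 1, 1),
                                 nn.BatchNorm3d(cout), nn.ReLU())
        self.d1, self.d2 = dec(512, 256), dec(256 + 256, 128)
        self.d3, self.d4 = dec(128 + 128, 64), dec(64 + 64, 32)
        self.out = nn.Conv3d(32, classes, 1)  # final convolution over channels

    @staticmethod
    def up(x, ref):  # upsample x to the spatial size of the skip feature
        return F.interpolate(x, size=ref.shape[2:], mode="trilinear",
                             align_corners=False)

    def forward(self, x):                     # x: (B, 1, T, H, W) frame stack
        s1 = self.e1(self.stem(x))            # stored stage outputs (S3.2)
        s2 = self.e2(s1)
        s3 = self.e3(s2)
        s4 = self.e4(s3)
        d = self.d1(s4)                       # decoding stage 1 (S3.3)
        d = self.d2(torch.cat([self.up(d, s3), s3], dim=1))  # skip connection
        d = self.d3(torch.cat([self.up(d, s2), s2], dim=1))
        d = self.d4(torch.cat([self.up(d, s1), s1], dim=1))
        return self.out(self.up(d, x))        # segmentation prediction sequence
```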
S4, importing the segmentation prediction result sequence generated in step S3 into the spleen wound ultrasound image segmentation and classification system, and predicting with the spleen wound grade classification model to obtain the spleen wound grade classification label corresponding to the ultrasound images.
The network structure of the spleen wound grade classification model comprises a first three-dimensional ResNet residual network structure, a second three-dimensional ResNet residual network structure and a Softmax classifier: the segmentation prediction result sequence is input to the first ResNet structure, the downsampled encoding of the first structure is input to the second, the downsampled encoding of the second is input to the Softmax classifier, and the Softmax classifier classifies the extracted features to obtain the spleen wound grade classification label corresponding to the input segmentation prediction result sequence, as shown in FIG. 4.
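A minimal sketch, assuming PyTorch, of this classification head: two stacked three-dimensional residual downsampling stages followed by global pooling and a Softmax classifier. The channel widths and the number of grades are illustrative assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class Down3d(nn.Module):
    """3D residual downsampling stage: y = ReLU(F(x) + proj(x))."""
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(cin, cout, 3, 2, 1), nn.BatchNorm3d(cout), nn.ReLU(),
            nn.Conv3d(cout, cout, 3, 1, 1), nn.BatchNorm3d(cout))
        self.proj = nn.Conv3d(cin, cout, 1, 2)  # match stride and channels

    def forward(self, x):
        return F.relu(self.body(x) + self.proj(x))

class GradeClassifier3d(nn.Module):
    """First ResNet stage -> second ResNet stage -> Softmax grade label."""
    def __init__(self, num_grades=5):            # grade count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            Down3d(1, 32),                        # first 3D residual structure
            Down3d(32, 64),                       # second 3D residual structure
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.classifier = nn.Sequential(nn.Linear(64, num_grades),
                                        nn.Softmax(dim=1))

    def forward(self, seg_sequence):              # (B, 1, T, H, W) mask stack
        return self.classifier(self.features(seg_sequence))
```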
S5, outputting the corresponding visualized segmentation and classification result from the spleen wound ultrasound image segmentation and classification system.
Further, the method also comprises, before step S1:
S0, training the spleen wound segmentation model and the spleen wound grade classification model of the spleen wound ultrasound image segmentation and classification system, specifically:
S0.1, the training ultrasound material comprises spleen-wound and normal-spleen ultrasound videos; each training video is frame-extracted in the manner of step S2 to generate a group of training image frame sequences, each matched with a group of manually annotated segmentation image frame sequences and the corresponding spleen wound grade classification label, which respectively form the original image dataset, the segmentation label dataset and the classification label dataset; the manually annotated segmentation image frame sequences contain position information such as the spleen contour and/or the wound contour;
S0.2, training the spleen wound segmentation model on the original image dataset and the segmentation label dataset obtained in step S0.1;
S0.3, training the spleen wound grade classification model on the segmentation label dataset and the classification label dataset obtained in step S0.1.
Further, the spleen wound segmentation model and the spleen wound grade classification model trained in step S0 are each converted into model scripts by the TorchScript method: the model is converted from a pure Python program into a TorchScript program that can run independently of Python, so the model scripts can be loaded into a process with no Python dependency.
The spleen wound ultrasound image segmentation and classification system loads the scripts as its prediction tool and outputs visualized prediction results in a visual interface; the system can therefore run in a portable-device environment without Python dependencies, achieving light weight and portability, as shown in FIG. 5.
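A minimal sketch of this export-and-load flow using the torch.jit API; the file name and model variable are placeholders:

```python
import torch

# On the training workstation: convert the trained pure-Python model into a
# TorchScript program that no longer needs the Python source to run.
seg_model.eval()
torch.jit.script(seg_model).save("spleen_seg.pt")

# In the deployed system: load the script as the prediction tool. The same
# file can also be loaded from LibTorch (C++) with no Python dependency,
# which is what makes the portable deployment lightweight.
predictor = torch.jit.load("spleen_seg.pt")
with torch.no_grad():
    masks = predictor(frames)  # frames: (B, 1, T, H, W) float tensor
```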
Further, the frame-extraction scheme of step S2 can be adjusted: besides extraction by time interval or frame interval, a scheme based on inter-frame difference strength, for example, can serve as an alternative for the frame-extraction flow that forms the image sequence in the artificial-intelligence-assisted spleen wound ultrasonic diagnosis system, as sketched below.
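A minimal sketch of the inter-frame-difference alternative: a frame is kept only when its mean absolute difference from the last kept frame exceeds a threshold, so near-duplicate frames are dropped. The threshold value is an assumption:

```python
import cv2
import numpy as np

def diff_based_frames(video_path, threshold=12.0):
    """Keep frames whose mean absolute difference from the previously kept
    frame exceeds threshold (threshold is an illustrative assumption)."""
    cap = cv2.VideoCapture(video_path)
    kept, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if last is None or np.abs(gray - last).mean() > threshold:
            kept.append(gray)   # frame differs enough: keep it
            last = gray
    cap.release()
    return kept
```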
Further, the structure of the spleen wound segmentation model of step S3 can be adjusted: for example, the three-dimensional ResNet network can be changed to a combined two- and three-dimensional form, or another three-dimensional CNN network structure or an LSTM network structure can be adopted, as an alternative for the segmentation-prediction flow in the spleen wound ultrasound image segmentation and classification system.
Further, the structure of the spleen wound grade classification model of step S4 can be adjusted: other CNN multi-class network structures, for example, can serve as an alternative for the classification-prediction flow in the spleen wound ultrasound image segmentation and classification system.
Further, the residual network (ResNet) comprises one or more residual modules, as shown in FIG. 6. A shortcut connection is added between the input and the output of each residual module, i.e. the output of the module is the element-wise sum of the feature F(x), learned through the weight matrices, and the input x. This avoids the vanishing-gradient problem that deep convolutional networks may suffer; by applying residual learning over the cross-layer data path, the degradation problem is solved, so an effective deep neural network can be trained.
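In standard residual-network notation (a formalization consistent with FIG. 6, not verbatim from the patent), the module computes

```latex
y = \mathcal{F}(x, \{W_i\}) + x,
\qquad
\frac{\partial y}{\partial x} = \frac{\partial \mathcal{F}}{\partial x} + I,
```

where the identity term I gives gradients a path that bypasses the weight layers, which is why the shortcut connection mitigates vanishing gradients in deep networks.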
Further, as shown in FIG. 7, U-Net uses the classical encoder-decoder structure, and the network consists of two parts: the encoding stage uses a contracting path to gradually reduce the spatial dimension of the feature maps and obtain high-level context information, while the decoding stage uses a symmetric expanding path to recover resolution through corresponding upsampling. In addition, skip connections pass each encoder output to the decoder of the same level, reducing the loss of spatial information caused by downsampling; by concatenating feature maps along the channel dimension, the upsampled feature maps carry more low-level semantic information, restoring lesion detail and spatial dimensions for precise localization. Because of its 'U' shape, the network is called U-Net.
In the invention, the spleen wound segmentation model for ultrasonic diagnosis adopts ResNet as the backbone of U-Net, introducing the residual network structure into the original U-Net and directly connecting the encoder to the decoder, which improves accuracy and reduces processing time to some extent. In this way the information lost at different layers of the encoding part is preserved, while no extra parameters or operations are added when it is re-learned, so overall the accuracy of spleen wound segmentation is greatly improved.
According to the invention, model parameters are trained on spleen-wound and normal-spleen ultrasound videos, giving the method the capability to analyze ultrasound video streams; the original image sequence generated from an analyzed video stream helps to complement the features of any single image, so the model fully understands the context and segmentation precision improves. Meanwhile, the frame-extraction process exploits the information in the video to the greatest extent while greatly reducing the storage footprint of the dataset. On this basis, predictive segmentation of single spleen-lesion images is also supported.
The spleen wound segmentation model for ultrasonic diagnosis combines a residual network structure with a U-Net framework, gaining the advantages of both network structures and improving the accuracy of lesion segmentation.
In step S4 of the invention, multiple spleen-wound ultrasound segmentation results are combined on the basis of the spleen wound grade classification model for comprehensive judgment and classification; the correlation of information between images is fully exploited, sufficient context information is obtained, and the accuracy of wound-grade classification improves.
The foregoing is merely a preferred embodiment of the invention. It should be noted that those skilled in the art can make modifications and variations without departing from the technical principles of the invention, and such modifications and variations should also be regarded as falling within the protection scope of the invention.

Claims (10)

1. A method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds, comprising the following steps:
S1, connecting a medical imaging device to a portable device and acquiring ultrasound images of the spleen region of the wounded;
S2, sampling and frame-extracting the ultrasound video stream to generate a video image frame sequence;
S3, importing the video image frame sequence generated from each ultrasound video into a spleen wound ultrasound image segmentation and classification system, and predicting on the frame sequence with a spleen wound segmentation model to obtain a corresponding segmentation prediction result sequence, the segmentation prediction result sequence containing spleen contour and/or wound contour position information;
S4, importing the segmentation prediction result sequence generated in step S3 into the spleen wound ultrasound image segmentation and classification system, and predicting with a spleen wound grade classification model to obtain the spleen wound grade classification label corresponding to the ultrasound images;
S5, outputting the corresponding visualized segmentation and classification result from the spleen wound ultrasound image segmentation and classification system.
2. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 1, characterized in that before step S1 the method further comprises: S0, training the spleen wound segmentation model and the spleen wound grade classification model of the spleen wound ultrasound image segmentation and classification system.
3. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 2, characterized in that step S0 comprises:
S0.1, the training ultrasound material comprises spleen-wound and normal-spleen ultrasound videos; each training video is frame-extracted to generate a group of training image frame sequences, each matched with a group of manually annotated segmentation image frame sequences and the corresponding spleen wound grade classification label, which respectively form the original image dataset, the segmentation label dataset and the classification label dataset; the manually annotated segmentation image frame sequences contain spleen contour and/or wound contour position information;
S0.2, training the spleen wound segmentation model on the original image dataset and the segmentation label dataset obtained in step S0.1;
S0.3, training the spleen wound grade classification model on the segmentation label dataset and the classification label dataset obtained in step S0.1.
4. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 3, characterized in that the spleen wound segmentation model and the spleen wound grade classification model trained in step S0 are each converted into model scripts by the TorchScript method, and the spleen wound ultrasound image segmentation and classification system loads the scripts as its prediction tool and outputs visualized prediction results in a visual interface.
5. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 1, characterized in that in step S2 OpenCV video processing is used to uniformly extract multiple image frames from the corresponding video, and data preprocessing is performed to remove irrelevant information.
6. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 1, characterized in that the frame extraction in step S2 adopts a scheme based on time interval, frame interval or inter-frame difference strength.
7. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to any one of claims 1-6, characterized in that in step S3 the spleen wound segmentation model adopts a residual network structure combined with a U-Net architecture, with the specific network structure designed as follows:
S3.1, the input image frame sequence is brought, by an image-scaling method and a three-dimensional convolution layer, to the same size and dimensions as the feature maps of the corresponding decoding stage;
S3.2, the data then pass sequentially through a three-dimensional convolution layer, a BatchNorm normalization layer, a ReLU activation function and a max-pooling layer before the supervision and loss computation of the encoding stage (encoder); encoding comprises four stages, containing 3, 4, 6 and 3 coding network structures respectively; the input of each stage is the downsampled output of the previous stage, and the output of each stage is stored;
S3.3, the supervision and loss computation of the decoding stage (decoder) is then performed over four decoding stages: the first takes the output of the fourth encoding stage as input, while the second, third and fourth take the upsampled output of the previous decoding stage together with the outputs of the third, second and first encoding stages respectively; each decoding stage comprises a convolution layer, a BatchNorm normalization layer and a ReLU activation function, and after the fourth stage outputs the per-channel feature maps the final result is convolved to obtain the segmentation prediction result sequence.
8. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 7, characterized in that the coding network structure is a three-dimensional ResNet residual network structure, a combined two- and three-dimensional ResNet residual network structure, a three-dimensional CNN network structure or an LSTM network structure.
9. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 8, characterized in that in step S4 the network structure of the spleen wound grade classification model comprises a first three-dimensional ResNet residual network structure, a second three-dimensional ResNet residual network structure and a Softmax classifier: the segmentation prediction result sequence is input to the first ResNet structure, the downsampled encoding of the first structure is input to the second, the downsampled encoding of the second is input to the Softmax classifier, and the Softmax classifier classifies the extracted features to obtain the spleen wound grade classification label corresponding to the input segmentation prediction result sequence.
10. The method for artificial-intelligence-assisted ultrasonic diagnosis of spleen wounds according to claim 9, characterized in that the ResNet residual network structure comprises one or more residual modules, with a shortcut connection added between the input and the output of each module; that is, the output of a residual module is the element-wise sum of the feature F(x), learned through the weight matrices, and the input x.
CN202310620325.9A 2023-05-30 2023-05-30 Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence Pending CN116934683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310620325.9A CN116934683A (en) 2023-05-30 2023-05-30 Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310620325.9A CN116934683A (en) 2023-05-30 2023-05-30 Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence

Publications (1)

Publication Number Publication Date
CN116934683A 2023-10-24

Family

ID=88381712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310620325.9A Pending CN116934683A (en) 2023-05-30 2023-05-30 Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence

Country Status (1)

Country Link
CN (1) CN116934683A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903292A (en) * 2019-01-24 2019-06-18 西安交通大学 A kind of three-dimensional image segmentation method and system based on full convolutional neural networks
CN113870258A (en) * 2021-12-01 2021-12-31 浙江大学 Counterwork learning-based label-free pancreas image automatic segmentation system
CN114998674A (en) * 2022-05-12 2022-09-02 南京航空航天大学 Device and method for tumor focus boundary identification and grade classification based on contrast enhanced ultrasonic image
CN115946999A (en) * 2022-12-23 2023-04-11 中科迈航信息技术有限公司 Garbage classification method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周晨芳 (Zhou Chenfang), "基于改进的3D U-Net的肺结节检测算法" [Pulmonary nodule detection algorithm based on an improved 3D U-Net], 现代信息科技 (Modern Information Technology), no. 12, 25 June 2020 (2020-06-25) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409002A (en) * 2023-12-14 2024-01-16 常州漫舒医疗科技有限公司 Visual identification detection system for wounds and detection method thereof

Similar Documents

Publication Publication Date Title
CN112529894B (en) Thyroid nodule diagnosis method based on deep learning network
CN111429473B (en) Chest film lung field segmentation model establishment and segmentation method based on multi-scale feature fusion
US11704808B1 (en) Segmentation method for tumor regions in pathological images of clear cell renal cell carcinoma based on deep learning
Chen et al. Discriminative cervical lesion detection in colposcopic images with global class activation and local bin excitation
Jiang et al. Deep learning for COVID-19 chest CT (computed tomography) image analysis: A lesson from lung cancer
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
CN111553892A (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN113420826A (en) Liver focus image processing system and image processing method
CN114565572A (en) Cerebral hemorrhage CT image classification method based on image sequence analysis
CN116934683A (en) Method for assisting ultrasonic diagnosis of spleen wound by artificial intelligence
CN113240654A (en) Multi-dimensional feature fusion intracranial aneurysm detection method
CN117152433A (en) Medical image segmentation method based on multi-scale cross-layer attention fusion network
CN115620912A (en) Soft tissue tumor benign and malignant prediction model construction method based on deep learning
CN116188424A (en) Method for establishing artificial intelligence assisted ultrasonic diagnosis spleen and liver wound model
CN114764855A (en) Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning
CN116433586A (en) Mammary gland ultrasonic tomography image segmentation model establishment method and segmentation method
CN115409812A (en) CT image automatic classification method based on fusion time attention mechanism
CN115471512A (en) Medical image segmentation method based on self-supervision contrast learning
Dong et al. Segmentation of pulmonary nodules based on improved UNet++
CN116385814B (en) Ultrasonic screening method, system, device and medium for detection target
Lin et al. Research of Lung Nodule Segmentation Algorithm Based on 3D U-Net Network
CN117636064B (en) Intelligent neuroblastoma classification system based on pathological sections of children
CN117197434B (en) Pulmonary medical image accurate identification method based on AMFNet network fusion model
Han et al. U-CCNet: Brain Tumor MRI Image Segmentation Model with Broader Global Context Semantic Information Abstraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination