CN114188021A - Intelligent analysis system for children intussusception diagnosis based on multi-mode fusion - Google Patents

Intelligent analysis system for children intussusception diagnosis based on multi-mode fusion

Info

Publication number
CN114188021A
Authority
CN
China
Prior art keywords
intussusception
children
model
ultrasonic image
child
Prior art date
Legal status
Granted
Application number
CN202111520413.9A
Other languages
Chinese (zh)
Other versions
CN114188021B (en)
Inventor
俞刚
李哲明
黄坚
沈忱
李竞
宋春泽
黄寿奖
陈星�
段梦宇
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202111520413.9A
Publication of CN114188021A
Application granted
Publication of CN114188021B
Status: Active

Classifications

    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06F 16/284: Relational databases
    • G06F 16/288: Entity relationship models
    • G06F 16/367: Ontology
    • G06F 18/253: Fusion techniques of extracted features
    • G06F 40/295: Named entity recognition
    • G06F 40/30: Semantic analysis
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pathology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Animal Behavior & Ethology (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an intelligent analysis system for diagnosing intussusception in children based on multi-modal fusion, which comprises a children intussusception relation extraction model, a text feature extraction model, an ultrasound image quality control model, a structured data feature extraction model and a feature fusion model. The children intussusception relation extraction model is used to construct a children intussusception knowledge graph; the text feature extraction model extracts text features from the original diagnosis medical record and the ultrasound diagnosis report; the ultrasound image quality control model extracts image features from children intussusception ultrasound images; the structured data feature extraction model extracts structured data features from children intussusception laboratory data; and the feature fusion model fuses the text, image and structured data features and, after outputting a high-dimensional feature, maps the prediction output for children intussusception to a probability distribution with a normalization function. The invention enables rapid diagnosis of intussusception in children, shortening diagnosis time and reducing the false-positive rate.

Description

Intelligent analysis system for children intussusception diagnosis based on multi-mode fusion
Technical Field
The invention belongs to the field of medical artificial intelligence, and particularly relates to an intelligent analysis system for diagnosing intussusception in children based on multi-modal fusion.
Background
Intussusception is a clinically common pediatric surgical acute abdomen, mainly affecting children under 2 years of age. It occurs when one segment of intestine telescopes into an adjacent segment; early, timely diagnosis and active, correct treatment can prevent intestinal necrosis and relieve the child's suffering.
Ultrasound diagnosis is an atraumatic, painless examination readily accepted by children and their families. The typical ultrasound sonogram of intussusception in children shows two signs: the transverse section presents a "concentric circle" sign, and the longitudinal section presents a "sleeve" sign. Doctors therefore mostly judge whether a patient has intussusception by recognizing the "concentric circle" sign, but the growing volume of image data also burdens diagnosis and treatment.
With the development of deep learning, the traditional mode of relying on doctors to read images manually is changing: data-driven deep learning, combined with imaging technology, medical image processing and computer analysis, lets the computer assist in finding lesions and improves diagnostic accuracy.
For example, Chinese patent publication No. CN107133942A discloses a medical image processing method based on deep learning. The method comprises the following steps: selecting a labeled medical training set image to train and adapt the migrated neural network model, obtaining a trained medical diagnosis model; converting the image format of the medical picture as the diagnosis model requires, and performing image enhancement on the picture; and extracting the bottleneck feature of the medical picture, performing image diagnosis with the diagnosis model according to the bottleneck feature, and outputting a diagnosis result.
Chinese patent publication No. CN109215021A discloses a method for quickly recognizing cholelithiasis CT medical images based on deep learning, which includes: 1) collecting CT medical images of cholelithiasis to construct a training set; 2) processing the images in the training set to generate a required training sample; 3) carrying out deep learning-based gallstone CT medical image identification training by using a training sample to generate a trained gallstone CT medical image rapid identification model; 4) acquiring a new CT medical image of the cholelithiasis, and constructing a verification set; 5) the model is verified using the verification set image.
However, given the varied clinical manifestations of children intussusception, its complicated etiology and course, and the discrete disease data (medical history, examinations, etc.), diagnosis and treatment take a long time, and the false-positive rate of ultrasound examination in primary hospitals is high. Meanwhile, the traditional diagnosis and treatment process is cumbersome: the doctor-patient communication waiting period is long, interaction quality is low, and response is slow.
Therefore, an intelligent analysis system for children intussusception diagnosis is urgently needed to shorten diagnosis time, reduce the false-positive rate, comprehensively raise the level of treatment of children intussusception in China, and make reasonable, effective use of medical resources.
Disclosure of Invention
The invention provides an intelligent analysis system for children intussusception diagnosis based on multi-modal fusion, which enables rapid diagnosis of children intussusception, thereby shortening diagnosis time and reducing the false-positive rate.
A child intussusception diagnosis intelligent analysis system based on multi-modal fusion comprises a computer memory, a computer processor and a computer program which is stored in the computer memory and can be executed on the computer processor, wherein a trained child intussusception relation extraction model, a text feature extraction model, an ultrasonic image quality control model, a structured data feature extraction model and a feature fusion model are stored in the computer memory;
the child intussusception relation extraction model identifies entities in special cases of child intussusception by adopting a method of combining BiLSTM and CRF, and carries out named entity identification on a child intussusception unstructured text data set by adopting the combined model; meanwhile, the relationship in the special disease of the intussusception of the children is extracted by adopting a BilSTM and attention mechanism, so that a knowledge map of the intussusception of the children based on the chief complaint of abdominal pain is constructed;
the text feature extraction model extracts text features from the original diagnosis medical record and the ultrasound diagnosis report by applying natural language processing in combination with the constructed children intussusception knowledge graph;
the ultrasound image quality control model extracts image features from children intussusception ultrasound images;
the structured data feature extraction model extracts structured data features from children intussusception laboratory data;
the feature fusion model fuses the text, image and structured data features and, after outputting a high-dimensional feature, maps the prediction output for children intussusception to a probability distribution with a normalization function.
Further, the construction of the children intussusception knowledge graph mainly comprises entity extraction and relation extraction, specifically:
(1) collecting relevant literature and expert consensus on the children intussusception specialty disease, and collecting original medical record data and ultrasound image reports of children intussusception from the last decade at a children's hospital;
(2) preprocessing the collected data, performing word segmentation and labeling, and dividing training and test sets at a 7:3 ratio;
(3) training a children intussusception entity extraction model with BiLSTM-CRF: a BiLSTM layer first obtains context-aware word vector representations, a CRF layer adds the necessary constraints to keep the label sequence reasonable and valid, and an entity set is finally extracted;
(4) labeling the data set with the extracted entities, and dividing training and test sets at a 7:3 ratio;
(5) extracting relations in the children intussusception specialty disease with BiLSTM plus an attention mechanism, constructing the children intussusception relation extraction model;
(6) after entity information and the relations between different entities are extracted from the text passages, storing the extracted entity-relation-entity triples in a Neo4j database and rendering them as a graphical knowledge graph structure, completing the construction and visualization of the children intussusception specialty disease knowledge graph.
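Step (6), storing extracted triples in Neo4j, might look like the following minimal sketch, which only generates the Cypher statements; the entity names, node label and relation types here are illustrative assumptions, not taken from the patent:

```python
# Sketch (not the patent's implementation): turn extracted
# entity-relation-entity triples into idempotent Cypher MERGE statements
# that could be sent to a Neo4j database.

def triple_to_cypher(head, relation, tail, label="Entity"):
    """Build one Cypher statement for a (head, relation, tail) triple."""
    return (
        f"MERGE (h:{label} {{name: '{head}'}}) "
        f"MERGE (t:{label} {{name: '{tail}'}}) "
        f"MERGE (h)-[:{relation}]->(t)"
    )

# Illustrative triples such as might be extracted from intussusception records:
triples = [
    ("intussusception", "HAS_SYMPTOM", "abdominal pain"),
    ("intussusception", "HAS_SIGN", "concentric circle sign"),
    ("intussusception", "EXAMINED_BY", "abdominal ultrasound"),
]

statements = [triple_to_cypher(h, r, t) for h, r, t in triples]
```

Using MERGE rather than CREATE keeps the load idempotent, so re-importing the same triples does not duplicate nodes or relations.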
In step (2), the preprocessing comprises desensitization and cleaning operations.
Furthermore, the ultrasound image quality control model comprises an image feature extraction model, an ultrasound image full-scan model and a concentric-circle longitudinal/transverse section detection model; the full-scan model adopts an improved YOLOv5s network in which a convolutional block attention module (CBAM) is fused into the Neck part of YOLOv5s, improving the network's feature extraction capability.
In training the ultrasound image quality control model, the training data set is produced as follows:
collecting children intussusception ultrasound images from the last decade in the children's hospital ultrasound imaging system;
consulting the relevant ultrasound literature and ultrasound imaging experts to determine the quality standards for children intussusception ultrasound images: the transverse section shows a "concentric circle" sign, the longitudinal section shows a "sleeve" sign, and local intestinal wall blood flow signals are increased;
ensuring the scan covers the kidney region, with 2 sonographers screening for standard ultrasound images;
having 2 experts annotate each standard on the standard sections with the Pair intelligent annotation software, the annotated images serving as the data set for training the image feature extraction model.
In training the ultrasound image full-scan model, CIoU loss replaces GIoU loss as the regression loss function for the target bounding box, accelerating bounding-box regression, improving localization precision and alleviating missed detections.
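As a rough plain-Python sketch of the CIoU regression loss mentioned above (an illustration, not the patent's training code): CIoU augments IoU with a center-distance term and an aspect-ratio consistency term.

```python
import math

def ciou_loss(box_a, box_b):
    """CIoU loss between two boxes given as (x1, y1, x2, y2).
    Sketch of the regression loss said to replace GIoU loss."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union -> IoU
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared centre distance over squared enclosing-box diagonal
    cx_a, cy_a = (ax1 + ax2) / 2, (ay1 + ay2) / 2
    cx_b, cy_b = (bx1 + bx2) / 2, (by1 + by2) / 2
    rho2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1))
        - math.atan((bx2 - bx1) / (by2 - by1))
    ) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - (iou - rho2 / c2 - alpha * v)
```

Unlike plain IoU loss, the centre-distance term still provides a gradient when the boxes do not overlap, which is why the text credits CIoU with faster regression and fewer missed detections.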
When extracting image features with the children intussusception ultrasound image quality control model, the ultrasound images are first normalized by image preprocessing so that the kidney lies in the upper or lower part of the image, and the full-scan model detects whether the kidney was scanned and returns the kidney-region detection result; the concentric-circle longitudinal/transverse section detection model then detects whether the ultrasound image captured the concentric-circle transverse section and, if so, goes on to detect whether it captured the longitudinal section.
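The two-step quality-control flow just described can be sketched as follows; the detector callables are hypothetical stand-ins for the full-scan model and the concentric-circle section detection model, and the overall pass criterion is an assumption:

```python
# Sketch of the staged quality-control decision flow (illustrative only).

def quality_control(image, detect_kidney, detect_cross_section, detect_long_section):
    """Return (passed, findings) for one preprocessed ultrasound image."""
    findings = {"kidney": False, "cross_section": False, "longitudinal_section": False}
    findings["kidney"] = detect_kidney(image)
    if not findings["kidney"]:           # kidney region must be scanned first
        return False, findings
    findings["cross_section"] = detect_cross_section(image)
    if findings["cross_section"]:        # only then look for the longitudinal section
        findings["longitudinal_section"] = detect_long_section(image)
    passed = findings["cross_section"] and findings["longitudinal_section"]
    return passed, findings
```

The early return mirrors the text: the longitudinal-section check is only run once the transverse "concentric circle" section has been found.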
Further, after the feature fusion model outputs the high-dimensional feature, the prediction output for children intussusception is mapped to a probability distribution with the normalization function Softmax. Writing the fused high-dimensional feature as z and the predicted value as d̂, then:

$$P(\hat{d} = d \mid s, v, f) = \frac{\exp(z_d)}{\sum_{d' \in D} \exp(z_{d'})}$$

where s ∈ S is the text-feature input, v ∈ V the ultrasound-image-feature input and f ∈ F the structured-data input, so that the probability of the prediction target d ∈ D is maximized.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention uses natural language processing to construct a multi-modal-fusion-based children intussusception specialty disease knowledge graph, which expresses the relations between entities in the specialty disease effectively and intuitively and enables efficient querying and knowledge reasoning.
2. The invention uses an improved YOLOv5s target detection algorithm to detect whether a kidney was scanned and to return the kidney-region detection result; fusing the convolutional block attention module CBAM into the Neck part of the YOLOv5s network improves the network's feature extraction capability and thereby the target detection effect.
3. The feature fusion model fuses the text, image and structured data features; after outputting the high-dimensional feature, it maps the prediction output for children intussusception to a probability distribution with a normalization function, greatly improving prediction accuracy.
Drawings
FIG. 1 is a block diagram of an intelligent analysis system for children intussusception diagnosis based on multi-modal fusion according to the present invention;
FIG. 2 is a flow chart of the construction of a knowledge map of intussusception of children of the present invention;
FIG. 3 is a frame diagram of an ultrasonic image quality control model for extracting image features from an ultrasonic image of intussusception of children according to the present invention;
FIG. 4 is a schematic representation of ultrasound signs of intussusception in children according to an embodiment of the present invention;
FIG. 5 is a network structure diagram of the ultrasound image full scan model of the present invention;
FIG. 6 is a block diagram of the convolution attention module CBAM of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
As shown in fig. 1, an intelligent analysis system for children intussusception diagnosis based on multi-modal fusion comprises a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor, wherein the computer memory stores a trained children intussusception relation extraction model, a text feature extraction model, an ultrasound image quality control model, a structured data feature extraction model and a feature fusion model.
The children intussusception relation extraction model is used to construct the children intussusception knowledge graph based on the abdominal pain chief complaint; the text feature extraction model extracts text features from the original diagnosis medical record and the ultrasound diagnosis report by applying natural language processing in combination with the constructed knowledge graph; the ultrasound image quality control model extracts image features from children intussusception ultrasound images; the structured data feature extraction model extracts structured data features from children intussusception laboratory data; and the feature fusion model fuses the text, image and structured data features, outputting a high-dimensional feature and then mapping the prediction output for children intussusception to a probability distribution with a normalization function.
In the figure, s ∈ S is the text-feature input (patient medical record, medical history, etc.), v ∈ V the ultrasound-image-feature input and f ∈ F the structured-data input, so that the probability of the prediction target d ∈ D is maximized.
The multi-modal feature inputs pass through the feature-extraction layer, feature-fusion layer and related network structures to obtain the high-dimensional feature, and finally a normalization function maps the prediction output to a probability distribution. Writing the fused high-dimensional feature as z and the predicted value as d̂, then:

$$P(\hat{d} = d \mid s, v, f) = \frac{\exp(z_d)}{\sum_{d' \in D} \exp(z_{d'})}$$
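The fusion-and-normalization step described in the text (concatenate the three modality features, produce a high-dimensional representation through a linear map, then Softmax) can be sketched numerically; all dimensions and the single linear layer are illustrative assumptions, not the patent's network:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse_and_predict(s, v, f, W, b):
    """Concatenate text (s), image (v) and structured (f) features,
    apply one linear layer, and map the logits to a probability
    distribution over diagnoses with Softmax."""
    x = np.concatenate([s, v, f])
    z = W @ x + b                # fused high-dimensional representation -> logits
    return softmax(z)

# Illustrative dimensions: 8-d text, 16-d image, 4-d structured features,
# 2 output classes (intussusception / not intussusception).
s, v, f = rng.normal(size=8), rng.normal(size=16), rng.normal(size=4)
W = rng.normal(size=(2, 28))
b = np.zeros(2)
probs = fuse_and_predict(s, v, f, W, b)
```

In a real system the linear layer would be replaced by the trained feature-extraction and feature-fusion layers, but the final Softmax mapping is the same.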
as shown in figure 2, the child intussusception knowledge map is constructed as follows:
(1) Relevant literature and expert consensus on the children intussusception specialty disease are collected, together with original medical record data and ultrasound image reports of children intussusception from nearly ten years in the emergency system of a tertiary children's hospital.
(2) The collected data undergo the necessary preprocessing, including desensitization and cleaning, followed by word segmentation and data set labeling; training and test sets are divided at a 7:3 ratio.
(3) For text data, entity sets are extracted manually and with named entity recognition technology, according to the different structure types. The method trains a multi-modal-fusion-based children intussusception entity extraction model with BiLSTM-CRF: a BiLSTM layer first obtains context-aware word vector representations, then a CRF layer adds the necessary constraints to keep the labels reasonable and valid, reducing deviation in the final result.
(4) The data set is labeled with the extracted entities, and training and test sets are divided at a 7:3 ratio.
(5) Relations in the children intussusception specialty disease are extracted with BiLSTM plus an attention mechanism (BiLSTM-Attention), and a multi-modal-fusion-based children intussusception relation extraction model is constructed.
(6) After entity information and the relations between different entities are extracted from the text passages, the extracted entity-relation-entity triples are stored in a Neo4j database and depicted as a graphical knowledge semantic network that is converted into a knowledge graph structure, completing the construction and visualization of the multi-modal-fusion-based children intussusception specialty disease knowledge graph.
For constructing the children intussusception knowledge graph, the two most important steps are entity extraction and relation extraction. Given the efficiency of the BiLSTM-CRF model on sequence labeling tasks, entities in the children intussusception specialty disease are recognized by combining a bidirectional Long Short-Term Memory network (BiLSTM) with a Conditional Random Field (CRF), and named entity recognition is performed on the children intussusception unstructured text data set with the combined model.
The output of the BiLSTM is produced by two Long Short-Term Memory networks (LSTMs) running in opposite directions, so the information output at the current moment is determined by both the preceding and the following outputs, letting the BiLSTM capture richer context. The word-vectorized children intussusception specialty disease data are fed into the BiLSTM model, which combines forward and backward information to output a score for each entity label, i.e. the input to the CRF. The conditional random field is a conditional probability model; adding constraints to the BiLSTM output effectively improves entity recognition accuracy. The CRF takes the BiLSTM layer's output, learns constraint conditions from the training data to constrain the entity labels, and ensures the labels are valid for the children intussusception specialty disease. Attention resembles the human attention mechanism: weighting the BiLSTM outputs improves the accuracy of BiLSTM relation extraction for the children intussusception specialty disease.
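The CRF decoding step described above, choosing the best label sequence given per-token BiLSTM scores and learned transition constraints, can be sketched with a small Viterbi decoder (a simplified stand-in, not the patent's implementation; scores and tag sets below are illustrative):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best tag sequence given per-token tag scores (the BiLSTM output,
    shape (T, K)) and tag-transition scores (the CRF constraints,
    shape (K, K), transitions[i, j] = score of tag i -> tag j)."""
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):        # follow backpointers
        best.append(int(back[t][best[-1]]))
    return best[::-1]
```

With zero transition scores this reduces to per-token argmax; a strongly negative transition (e.g. forbidding an I-tag after an O-tag in BIO labeling) reroutes the decoded path, which is exactly the constraining role the text assigns to the CRF layer.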
The data organization of the knowledge graph divides roughly into two layers: a "top-down" conceptual schema layer and a "bottom-up" concrete data layer. In constructing the children intussusception expert knowledge base knowledge graph, the initial top-down approach gradually became a combination of the two. Neo4j is a popular graph database: it supports ACID transactions, offers high query performance with a built-in visual UI, and its unstructured storage model makes it highly flexible. The Neo4j database stores the entities and relations of the multi-modal-fusion-based children intussusception specialty disease, quickly building the specialty disease knowledge graph and visualizing the various entities and relations in children intussusception.
As shown in FIG. 3, the ultrasound image quality control model comprises an image feature extraction model, an ultrasound image full-scan model and a concentric-circle longitudinal/transverse section detection model. v ∈ V is the ultrasound-image-feature input, y1 is the image feature after the ultrasound image full-scan model, and y2 is the image feature after the concentric-circle transverse/longitudinal section detection model. The image features are finally obtained through this two-step quality control, and the quality evaluation result of the ultrasound image is output at the same time.
The training process of the ultrasonic image quality control model is as follows:
firstly, making a data set:
(1) ultrasonic images of approximately ten years of intussusception of children in a certain trimethyl hospital ultrasonic imaging system were collected.
(2) The relevant ultrasound literature and ultrasound imaging experts were consulted to determine the quality standards for children intussusception ultrasound images: the transverse section shows a "concentric circle" (or "target ring") sign; the longitudinal section shows a "sleeve" (or "pseudo-kidney") sign, appearing as a symmetrical multi-layer structure; and local intestinal wall blood flow signals are increased.
As shown in FIG. 4, (a) is the "concentric circle" sign, (b) the "sleeve" sign, and (c) the blood flow signal.
(3) Comprehensive ultrasound scanning was ensured, covering the kidney region; 2 sonographers screened for standard ultrasound images, and only images both sonographers judged to be standard were included in the subsequent study.
(4) According to the ultrasound standard sections, 2 experts annotated each standard on the standard sections with the Pair intelligent annotation software, and the annotated images were included in the subsequent study.
Second, constructing the ultrasound image quality control model:
The ultrasound image quality control model comprises an image feature extraction model, an ultrasound image full-scan model and a concentric-circle longitudinal/transverse section detection model. The full-scan model adopts an improved YOLOv5s network that fuses a convolutional block attention module (CBAM) into the Neck part of YOLOv5s, improving the network's feature extraction capability.
The improved YOLOv5s network structure is shown in FIG. 5; the YOLOv5s model consists of three parts: Backbone, Neck and Output.
The Backbone mainly adopts CSP (Cross Stage Partial) and SPP (Spatial Pyramid Pooling) structures. YOLOv5s designs two CSP structures, CSP1_X and CSP2_X, applied to the Backbone and Neck respectively, where X is the number of residual components. The X values of CSP1_X in the YOLOv5s Backbone are 1, 3 and 3 in turn, and the X values of CSP2_X in the Neck are all 1. In the SPP structure, the feature map passes through MaxPool operations with three pooling windows, and the results are concatenated along the channel dimension. SPP enlarges the receptive field and helps solve the alignment problem between anchors and feature maps.
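The SPP behaviour described above (three max-pool windows, channel concatenation) can be sketched in NumPy; the 5/9/13 window sizes follow the common YOLOv5 configuration and are an assumption here, and a naive loop stands in for an optimized pooling op:

```python
import numpy as np

def maxpool2d_same(x, k):
    """Naive k x k max-pool, stride 1, 'same' padding, on a (C, H, W) map."""
    c, h, w = x.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = xp[:, i:i + k, j:j + k].max(axis=(1, 2))
    return out

def spp(x, kernels=(5, 9, 13)):
    """SPP block: pool the feature map with three window sizes and
    concatenate the results (plus the input) along the channel dimension."""
    return np.concatenate([x] + [maxpool2d_same(x, k) for k in kernels], axis=0)
```

Because stride-1 same-padded pooling preserves spatial size, the output keeps H x W and quadruples the channel count (input plus three pooled copies), each copy carrying a progressively larger receptive field.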
The Neck part consists of the CSP2_X structure and an FPN (Feature Pyramid Network) + PAN (Path Aggregation Network) structure; FPN + PAN combines two feature pyramids. The feature pyramid structure improves the network's detection performance on targets of different scales. The FPN uses a top-down feature pyramid that propagates high-level semantic information by upsampling to obtain the predicted feature maps; the PAN uses a bottom-up feature pyramid that propagates low-level localization features by downsampling to enhance position information.
The Output part contains the bounding box loss function and Non-Maximum Suppression (NMS). The bounding box loss uses the GIoU_Loss function, which, compared with IoU_Loss, adds a measure of the enclosing scale and alleviates the problem of non-overlapping ground-truth and prediction boxes. It estimates the recognition loss of the target detection rectangle, generates object bounding boxes and predicts class information. In the inference stage of target detection, a weighted NMS operation is adopted to screen the multiple detection boxes of the same category appearing on the same target, keeping the box with the higher score as the final detection box.
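The weighted NMS step can be sketched as follows. This is one common reading of "weighted NMS" (fusing each overlap cluster by a score-weighted coordinate average), not necessarily the exact operation of the invention; function names and the IoU threshold are illustrative assumptions.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def weighted_nms(boxes, scores, thr=0.5):
    """Weighted NMS: instead of discarding boxes that overlap the current
    top-scoring box, fuse each overlap cluster into one box by a
    score-weighted average of coordinates; the fused box keeps the
    cluster's best score."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    used, fused = set(), []
    for i in order:
        if i in used:
            continue
        cluster = [j for j in order
                   if j not in used and iou(boxes[i], boxes[j]) >= thr]
        total = sum(scores[j] for j in cluster)
        box = tuple(sum(scores[j] * boxes[j][k] for j in cluster) / total
                    for k in range(4))
        fused.append((box, scores[i]))
        used.update(cluster)
    return fused
```

Compared with hard NMS, the lower-scored overlapping boxes still contribute to the final coordinates, which tends to stabilize the kept box's position.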
The most critical feature extraction in the YOLOv5s network happens in the Backbone. Therefore, the method of the invention inserts the CBAM after the Backbone and before the feature fusion of the Neck network: YOLOv5s completes feature extraction in the Backbone and predicts outputs on different feature maps after Neck feature fusion, so a CBAM performing attention reconstruction at this position serves as a bridge between the two stages and benefits both.
As shown in fig. 6, for a three-dimensional feature map of a certain layer in the CNN, the CBAM sequentially infers a one-dimensional channel attention map and a two-dimensional spatial attention map from it and multiplies them with the input element by element, finally obtaining an output feature map of the same dimension. The CBAM attends to both spatial and channel information: through its two sub-modules, the Channel Attention Module (CAM) and the Spatial Attention Module (SAM), it reconstructs the feature map in the middle of the network, emphasizing important features and suppressing general ones, thereby improving the target detection effect.
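A minimal sketch of the channel-attention half of CBAM described above (the spatial module follows the same squeeze-and-rescale pattern over spatial positions instead of channels). The weight matrices `w1`/`w2` stand in for the shared two-layer MLP and are illustrative assumptions, not trained parameters:

```python
import math

def channel_attention(feature_map, w1, w2):
    """CAM sketch: squeeze each channel by average- and max-pooling, pass
    both descriptors through a shared two-layer MLP, add the results,
    apply a sigmoid, and rescale every channel by its attention weight."""
    def mlp(vec):
        # w1: hidden x C (with ReLU), w2: C x hidden
        hidden = [max(0.0, sum(w * v for w, v in zip(row, vec))) for row in w1]
        return [sum(w * h for w, h in zip(row, hidden)) for row in w2]
    avg = [sum(sum(r) for r in c) / (len(c) * len(c[0])) for c in feature_map]
    mx = [max(max(r) for r in c) for c in feature_map]
    att = [1.0 / (1.0 + math.exp(-(a + m)))          # sigmoid of summed MLPs
           for a, m in zip(mlp(avg), mlp(mx))]
    return [[[v * att[ci] for v in row] for row in chan]
            for ci, chan in enumerate(feature_map)]
```

The output has exactly the shape of the input, matching the text's point that CBAM can be dropped between Backbone and Neck without changing any tensor dimensions.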
The concentric circle longitudinal and transverse section detection model detects whether the ultrasonic image contains concentric circles, and if so, further detects whether an ultrasonic image of the concentric circle longitudinal section exists.
The network structure of the concentric circle longitudinal and transverse section detection model consists of 3 stages: a backbone feature extraction network, a Region Proposal Network (RPN) and an ROI Pooling layer. The feature extraction network adopts the VGG16 convolutional neural network; VGG16 consists of 5 groups of convolutional layers, each group followed by a pooling layer, totaling 13 convolutional layers, 13 activation layers and 5 pooling layers. A skip connection layer is added between the third and fifth convolution groups, through which the shallow features and deep features of the network are combined.
Third, model training
YOLOv5s uses GIoU Loss as the bounding box regression loss function to evaluate the distance between the predicted box (PB) and the ground-truth box (GT). The advantage of GIoU Loss is scale invariance: the similarity of PB and GT is independent of their spatial scale. Its problem is that when PB or GT is fully enclosed by the other, GIoU Loss degenerates to IoU Loss; because it then depends heavily on the IoU term, convergence in actual training is too slow and the accuracy of the predicted bounding box is lower. CIoU Loss addresses these problems by simultaneously considering the overlap area, center-point distance and aspect ratio of PB and GT. The method therefore replaces GIoU Loss with CIoU Loss as the regression loss function of the target bounding box, which accelerates bounding box regression, improves localization precision and alleviates missed detection. The children intussusception ultrasonic image quality control model is finally obtained by continuously training and optimizing the model.
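The degeneration of GIoU Loss and the CIoU remedy can be checked numerically. This is a textbook sketch of the two losses (standard GIoU/CIoU formulas), not code from the patent:

```python
import math

def _area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def _iou_union(pb, gt):
    ix1, iy1 = max(pb[0], gt[0]), max(pb[1], gt[1])
    ix2, iy2 = min(pb[2], gt[2]), min(pb[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = _area(pb) + _area(gt) - inter
    return (inter / union if union > 0 else 0.0), union

def giou_loss(pb, gt):
    """1 - GIoU: IoU minus the enclosing-box area not covered by the union."""
    iou, union = _iou_union(pb, gt)
    cw = max(pb[2], gt[2]) - min(pb[0], gt[0])
    ch = max(pb[3], gt[3]) - min(pb[1], gt[1])
    c = cw * ch
    return 1.0 - (iou - (c - union) / c)

def ciou_loss(pb, gt):
    """1 - CIoU: adds a normalized center-distance term and an
    aspect-ratio consistency term on top of IoU."""
    iou, _ = _iou_union(pb, gt)
    rho2 = (((pb[0] + pb[2]) - (gt[0] + gt[2])) ** 2
            + ((pb[1] + pb[3]) - (gt[1] + gt[3])) ** 2) / 4.0
    cw = max(pb[2], gt[2]) - min(pb[0], gt[0])
    ch = max(pb[3], gt[3]) - min(pb[1], gt[1])
    c2 = cw ** 2 + ch ** 2                      # enclosing-box diagonal^2
    v = (4.0 / math.pi ** 2) * (math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                                - math.atan((pb[2] - pb[0]) / (pb[3] - pb[1]))) ** 2
    alpha = v / (1.0 - iou + v) if (1.0 - iou + v) > 0 else 0.0
    return 1.0 - iou + rho2 / c2 + alpha * v
```

When the predicted box is fully inside the ground truth, the enclosing box equals the ground truth, the GIoU penalty vanishes, and GIoU Loss cannot distinguish a centered box from an off-center one of the same size; CIoU still can, via the center-distance term, which is the gradient signal that speeds up regression.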
The embodiments described above illustrate the technical solutions and advantages of the present invention. It should be understood that they are only specific embodiments of the present invention and do not limit it; any modifications, additions and equivalents made within the scope of the principles of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A child intussusception diagnosis intelligent analysis system based on multi-modal fusion comprises a computer memory, a computer processor and a computer program which is stored in the computer memory and can be executed on the computer processor, and is characterized in that a trained child intussusception relation extraction model, a text feature extraction model, an ultrasonic image quality control model, a structured data feature extraction model and a feature fusion model are stored in the computer memory;
the child intussusception relation extraction model identifies entities in children intussusception specialty cases by a method combining BiLSTM and CRF, performing named entity recognition on the children intussusception unstructured text data set with the combined model; meanwhile, the relations in the children intussusception specialty disease are extracted by a BiLSTM with an attention mechanism, thereby constructing a children intussusception knowledge graph based on the chief complaint of abdominal pain;
the text feature extraction model is used for extracting text features from the original diagnosis medical records and ultrasonic diagnosis reports by combining the constructed children intussusception knowledge graph with natural language processing technology;
the ultrasonic image quality control model is used for extracting image characteristics in the ultrasonic image of the intussusception of the child;
the structured data feature extraction model is used for extracting structured data features from the children intussusception laboratory data;
the feature fusion model is used for fusing the text features, image features and structured data features, and, after outputting high-dimensional features, mapping the prediction output of the child intussusception into a probability distribution using a normalization function.
2. The intelligent analysis system for children intussusception diagnosis based on multi-modal fusion according to claim 1, wherein the construction process of the children intussusception knowledge graph mainly comprises entity extraction and relation extraction, specifically comprising the following steps:
(1) collecting relevant literature and expert consensus on the children intussusception specialty disease, and collecting original medical record data and ultrasonic image reports of children intussusception from the last decade at a children's hospital;
(2) preprocessing the collected data, performing word segmentation and labeling, and dividing a training set and a test set at a ratio of 7:3;
(3) training a BiLSTM-CRF model to construct the children intussusception entity extraction model; the BiLSTM layer first obtains context-dependent word vector representations, the CRF layer adds necessary constraints to the data labeling to ensure that it is reasonable and effective, and the entity set is finally extracted;
(4) labeling a data set with the extracted entities, and dividing a training set and a test set at a ratio of 7:3;
(5) extracting the relations in the children intussusception specialty disease by a BiLSTM with an attention mechanism, and constructing the children intussusception relation extraction model;
(6) after the entity information and the relations among different entities are extracted from the text segments, storing the extracted entity-relation-entity triples into a Neo4j database and rendering them as a graphical knowledge graph structure, completing the construction and visualization of the children intussusception specialty disease knowledge graph.
3. The intelligent analysis system for children intussusception diagnosis based on multi-modal fusion according to claim 2, wherein in step (2), the preprocessing comprises desensitization and cleaning operations.
4. The intelligent analysis system for children intussusception diagnosis based on multi-modal fusion according to claim 1, wherein the ultrasonic image quality control model comprises an image feature extraction model, an ultrasonic image full-scan model and a concentric circle longitudinal and transverse section detection model; the full-scan model adopts an improved YOLOv5s network in which a Convolutional Block Attention Module (CBAM) is fused with the Neck part of the YOLOv5s network, improving the feature extraction capability of the network.
5. The intelligent analysis system for children intussusception diagnosis based on multi-modal fusion as claimed in claim 4, wherein during the training process of the ultrasound image quality control model, the training data set is prepared as follows:
collecting the ultrasonic images of the intussusception of the children in the last decade in an ultrasonic image system of the children hospital;
reviewing the relevant ultrasonic imaging literature and consulting ultrasonic imaging experts to determine the criteria of children intussusception ultrasonic image quality, comprising: the transverse section shows a concentric circle sign, the longitudinal section shows a sleeve sign, and local intestinal wall blood flow signals are increased;
ensuring that the ultrasonic image is scanned to the kidney area, and screening a standard ultrasonic image by 2 ultrasonic doctors;
according to the ultrasonic standard sections, 2 experts annotate each standard section with Pair intelligent annotation software, and the annotated images are used as the data set for training the image feature extraction model.
6. The intelligent analysis system for children intussusception diagnosis based on multi-modal fusion as claimed in claim 4, wherein in the process of training the ultrasound image full-scan model, CIoU Loss is adopted to replace GIoU Loss as a regression Loss function of the target bounding box, so that the regression speed of the bounding box is accelerated, the positioning precision is improved, and the problem of missed detection is solved.
7. The intelligent analysis system for children intussusception diagnosis based on multi-modal fusion according to claim 4, wherein, when the ultrasonic image quality control model extracts image features from the ultrasonic images of children intussusception, the ultrasonic images are first unified by image preprocessing so that the kidney is located at the upper or lower part of the image, and the ultrasonic image full-scan model detects whether the kidney has been scanned and returns a kidney region detection result; the concentric circle longitudinal and transverse section detection model then detects whether the ultrasonic image has scanned a concentric circle transverse section, and if so, continues to detect whether a concentric circle longitudinal section has been scanned.
8. The system of claim 1, wherein the feature fusion model is configured to map the prediction output of the child intussusception into a probability distribution using the normalization function Softmax after outputting the high-dimensional features; assuming the fused high-dimensional features yield a score z_d for each candidate target d, then:

P(d | s, v, f) = exp(z_d) / Σ_{d' ∈ D} exp(z_{d'})

wherein s ∈ S is the text feature input, v ∈ V is the ultrasonic image feature input, and f ∈ F is the structured data input, so that the probability of the predicted target d ∈ D is the optimization objective.
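The Softmax normalization named in claim 8 is the standard one; a minimal sketch (the function name `softmax` and plain-list inputs are illustrative assumptions, and the max-subtraction is the usual numerical-stability trick):

```python
import math

def softmax(logits):
    """Numerically stable softmax: map the fused-feature output scores
    to a probability distribution over diagnosis targets."""
    m = max(logits)                              # subtract max to avoid overflow
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```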
CN202111520413.9A 2021-12-13 2021-12-13 Intelligent analysis system for children intussusception diagnosis based on multi-mode fusion Active CN114188021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111520413.9A CN114188021B (en) 2021-12-13 2021-12-13 Intelligent analysis system for children intussusception diagnosis based on multi-mode fusion

Publications (2)

Publication Number Publication Date
CN114188021A true CN114188021A (en) 2022-03-15
CN114188021B CN114188021B (en) 2022-06-10

Family

ID=80543514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111520413.9A Active CN114188021B (en) 2021-12-13 2021-12-13 Intelligent analysis system for children intussusception diagnosis based on multi-mode fusion

Country Status (1)

Country Link
CN (1) CN114188021B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913169A (en) * 2022-06-10 2022-08-16 浙江大学 Neonatal necrotizing enterocolitis screening system
CN115240844A (en) * 2022-07-15 2022-10-25 北京医准智能科技有限公司 Training method and device for auxiliary diagnosis model, electronic equipment and storage medium
CN115670505A (en) * 2022-10-24 2023-02-03 华南理工大学 Ultrasonic scanning control system based on multi-mode fusion
CN117012373A (en) * 2023-10-07 2023-11-07 广州市妇女儿童医疗中心 Training method, application method and system of grape embryo auxiliary inspection model
CN117455890A (en) * 2023-11-20 2024-01-26 浙江大学 Child intussusception air enema result prediction device based on improved integrated deep learning
CN117894454A (en) * 2024-01-29 2024-04-16 脉得智能科技(无锡)有限公司 Sarcopenia diagnosis method and device and electronic equipment
CN117995368A (en) * 2024-01-31 2024-05-07 智远汇壹(苏州)健康医疗科技有限公司 Individualized medical image diagnosis quality assurance method and system based on follow-up data

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110077516A1 (en) * 2009-09-30 2011-03-31 Yasuhiko Abe Ultrasonic diagnostic equipment and ultrasonic image processing apparatus
CN106308848A (en) * 2015-07-10 2017-01-11 通用电气公司 Method and device for measuring ultrasonic image
CN110222201A (en) * 2019-06-26 2019-09-10 中国医学科学院医学信息研究所 A kind of disease that calls for specialized treatment knowledge mapping construction method and device
CN111916207A (en) * 2020-08-07 2020-11-10 杭州深睿博联科技有限公司 Disease identification method and device based on multi-modal fusion
CN112309576A (en) * 2020-09-22 2021-02-02 江南大学 Colorectal cancer survival period prediction method based on deep learning CT (computed tomography) image omics
WO2021062366A1 (en) * 2019-09-27 2021-04-01 The Brigham And Women's Hospital, Inc. Multimodal fusion for diagnosis, prognosis, and therapeutic response prediction
CN112992317A (en) * 2021-05-10 2021-06-18 明品云(北京)数据科技有限公司 Medical data processing method, system, equipment and medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
J.M. SMITH; D.J. ALTON; J. COUTLEE: "Controlled reduction of intussusceptions: a case study in effective medical device technology", 《PROCEEDINGS OF 17TH INTERNATIONAL CONFERENCE OF THE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY》 *
刘雪薇等: "基于可视化的新生儿坏死性小肠结肠炎研究分析", 《卫生职业教育》 *
曾红艳等: "回盲部超声解剖在诊断小儿肠套叠中的应用价值", 《遵义医学院学报》 *
胡靖等: "高频超声对肠套叠患儿诊断及治疗的指导价值", 《深圳中西医结合杂志》 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant