CN114266915A - Artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method - Google Patents

Artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method

Info

Publication number
CN114266915A
Authority
CN
China
Prior art keywords
ultrasonic
nasointestinal tube
nasointestinal
identification
artificial intelligence
Prior art date
Legal status
Pending
Application number
CN202010960254.3A
Other languages
Chinese (zh)
Inventor
叶瑞忠
彭成忠
Current Assignee
Zhejiang Provincial Peoples Hospital
Original Assignee
Zhejiang Provincial Peoples Hospital
Priority date
Filing date
Publication date
Application filed by Zhejiang Provincial Peoples Hospital filed Critical Zhejiang Provincial Peoples Hospital
Priority to CN202010960254.3A priority Critical patent/CN114266915A/en
Publication of CN114266915A publication Critical patent/CN114266915A/en
Pending legal-status Critical Current

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses an artificial intelligence-based method for identifying and positioning a nasointestinal tube, which comprises the following steps: collecting ultrasonic images of the key examination sites of the digestive tract together with image data of the different ultrasonic features of the nasointestinal tube, classifying them according to examination site and ultrasonic feature, converting the classified data into structured ultrasonic image data through labeling, and establishing an ultrasonic image database as a training set; training with the structured nasointestinal tube data in the ultrasonic image database to obtain an intelligent nasointestinal tube identification model; collecting further ultrasonic image data containing nasointestinal tube sonograms as a verification set, and analyzing it with the trained model to obtain analysis results including identification and classification prediction probabilities; outputting and analyzing the prediction results while continuing to collect nasointestinal tube ultrasonic image data; and integrating the intelligent nasointestinal tube identification model into the ultrasonic terminal.

Description

Artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method
Technical Field
The invention belongs to the technical field of intelligent medical treatment, and particularly relates to a method and a system for identifying and positioning a nasointestinal tube based on artificial intelligence.
Background
Early enteral nutrition (within 48 hours of admission) for critically and chronically ill patients is well recognized in the medical field: it provides nutrient substrates and improves the intestinal mucosal barrier and immune function, thereby strengthening systemic immunity, reducing the risk of secondary infection, shortening hospital stays, lowering medical costs, and markedly improving prognosis. The nasointestinal tube is one of the important nutrition routes and is very commonly used in clinic. Once tube placement is finished, how to quickly confirm the tube position and start enteral nutrition as early as possible is a clinically important problem. The methods currently used to position a nasointestinal tube include abdominal X-ray examination, inspection of the aspirate's properties and pH, endoscopy, and CT, but these suffer either from ionizing radiation or from subjectivity and time consumption.
Ultrasound is convenient, non-invasive, and able to image in three dimensions, and it is increasingly applied to the placement and positioning of nasointestinal tubes. In practice, however, poor operator skill, insufficient understanding of anatomy and ultrasonic principles, the diversity of nasointestinal tube appearances, and interference from operation, gas, or artifacts reduce the accuracy of ultrasonic evaluation, so that clinical requirements are difficult to meet and clinical adoption suffers. In recent years, as artificial intelligence (AI) has matured and spread across the medical imaging field, using AI to overcome the operator and patient dependence of nasointestinal tube ultrasonic positioning, and ultimately arriving at a highly objective and accurate ultrasonic examination method, would have great clinical value and could effectively relieve both the shortage and the waste of medical resources.
Disclosure of Invention
In view of the above technical problems, the invention provides an artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method, which can improve diagnosis and treatment efficiency and reduce unnecessary waste of medical resources.
In order to solve the technical problems, the invention adopts the following technical scheme:
a nasointestinal tube ultrasonic identification and positioning method based on artificial intelligence comprises the following steps:
collecting ultrasonic images containing the key examination sites of the digestive tract and image data of the different ultrasonic features of the nasointestinal tube, classifying them according to examination site and ultrasonic feature, converting the classified data into structured ultrasonic image data through labeling, and establishing an ultrasonic image database as a training set;
training with the structured nasointestinal tube data in the ultrasonic image database to obtain an intelligent nasointestinal tube identification model;
collecting further ultrasonic image data containing nasointestinal tube sonograms as a verification set, and analyzing it with the trained model to obtain analysis results including identification and classification prediction probabilities;
outputting and analyzing the prediction results, continuing to collect nasointestinal tube ultrasonic image data, and continuously training and optimizing the intelligent identification model;
integrating the intelligent nasointestinal tube identification model into the ultrasonic terminal, thereby realizing intelligent identification and positioning of the nasointestinal tube.
Preferably, the key examination site images of the digestive tract include the cervical esophagus, the lower esophagus-cardia, the gastric antrum, the pylorus, the duodenal bulb, and the duodenal horizontal part.
Preferably, the nasointestinal tube ultrasonic feature images comprise a double-track sign, an equal sign, a bright band sign, a five-line sign, a bead-like mural gas sign, and a short-axis acoustic shadow sign.
Preferably, the intelligent nasointestinal tube recognition model is integrated into the ultrasonic terminal using edge computing technology.
Preferably, the classification prediction includes clear, fuzzy, not clearly displayed, in place, and not in place.
Preferably, an error correction function is further included.
The invention has the following beneficial effects: applying artificial intelligence to ultrasonic imaging is intelligent, rapid, accurate, and objective. It greatly reduces dependence on the operator's technical maturity, can provide clinically useful information in a variety of application scenarios, improves diagnosis and treatment efficiency, and reduces unnecessary waste of medical resources.
Drawings
FIG. 1 is a schematic diagram of a method for ultrasonic identification and location of a nasointestinal tube based on artificial intelligence according to an embodiment of the present invention;
fig. 2 to 14 are schematic views of ultrasound images of various parts.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic diagram of a method for identifying and positioning a nasointestinal tube based on artificial intelligence is shown, which includes the following steps:
Collecting ultrasonic images containing the key examination sites of the digestive tract and image data of the different ultrasonic features of the nasointestinal tube, classifying them according to examination site and ultrasonic feature, converting the classified data into structured ultrasonic image data through labeling, and establishing an ultrasonic image database as a training set. The key examination site images of the digestive tract comprise the cervical esophagus, the lower esophagus-cardia, the gastric antrum, the pylorus, the duodenal bulb, and the duodenal horizontal part. The nasointestinal tube ultrasonic feature images comprise a double-track sign, an equal sign, a bright band sign, a five-line sign, a bead-like mural gas sign, and a short-axis acoustic shadow sign.
Training by using structural data of the nasointestinal tube in an ultrasonic image database to obtain an intelligent nasointestinal tube identification model;
Collecting further ultrasonic image data containing nasointestinal tube sonograms as a verification set, and analyzing it with the trained intelligent identification model to obtain analysis results including identification and classification prediction probabilities. The classification predictions include clear, fuzzy, not clearly displayed, in place, and not in place.
Outputting and analyzing the prediction analysis result, continuously collecting ultrasonic image data of the nasointestinal tube ultrasonic image in the later period, and continuously training and optimizing the nasointestinal tube intelligent identification model;
The intelligent nasointestinal tube recognition model is integrated into the ultrasonic terminal, so that the nasointestinal tube is recognized and positioned intelligently. In a specific example, the model can be integrated into the ultrasonic terminal using edge computing technology. The ultrasonic terminal may be a large ultrasound instrument, a portable ultrasound instrument, an ultrasound robot, a handheld ultrasound device, or the like.
On the basis of the above embodiment, an error correction function is further included. If the operator's scanning section is incorrect, the system automatically issues a reminder (for example, by marking it in red) and guides the operator toward the standard section (marked in green). The operator thus continuously corrects errors, in a process of automatic learning and improvement.
The following will explain the implementation of the present invention with reference to further embodiments.
(1) The schematic of the acquisition of the ultrasonic image of the key part of the gastrointestinal tract is as follows:
1) Cervical esophagus: one short-axis image (fig. 2) and one long-axis image (fig. 3) were acquired in JPG format.
2) Lower esophagus-cardia: one long-axis image in JPG format was acquired, as shown in fig. 4.
3) Gastric antrum: one short-axis image (fig. 5) and one long-axis image (fig. 6) were acquired in JPG format.
4) Gastric pylorus and duodenal bulb: one image in JPG format was acquired according to the actual gastrointestinal status, as shown in fig. 7.
5) Duodenal horizontal part: one long-axis image in JPG format was acquired, as shown in fig. 8.
(2) Ultrasonic image acquisition of the nasointestinal tube: the double-track or equal sign, as shown in fig. 9; the bright band sign, as shown in fig. 10; the five-line sign, as shown in fig. 11; the bead-like mural gas sign, as shown in fig. 12; the bar-shaped acoustic shadow sign, as shown in fig. 13; and the short-axis acoustic shadow sign, as shown in fig. 14. One image in JPG format was acquired for each feature displayed.
(3) The ultrasonic images are collected by senior ultrasonographers, contain the relevant feature information, and are classified (see Table 1) and stored, building an ultrasonic image database that comprises a training set, a verification set, and a test set.
TABLE 1 Ultrasonic image characteristic information
(Table 1 appears as an image in the original document.)
(4) Gastrointestinal tract key part and nasointestinal tube detection model
Feature extraction is implemented with a Region Proposal Network (RPN) and Fast R-CNN. First, candidate regions containing gastrointestinal key sites and the nasointestinal tube are obtained through the RPN, and these candidate regions are mapped onto the extracted feature map to obtain their corresponding features. Fast R-CNN then identifies the features of each region and judges whether it contains a target of interest. To guarantee high sensitivity in processing the ultrasonic image data, Fast R-CNN generates many false positives; this is addressed by building a recognition network for gastrointestinal key sites and the nasointestinal tube. When data and the corresponding label are input, a candidate that contains a gastrointestinal key site or the nasointestinal tube is taken as a positive sample and labeled 1; a candidate that does not is taken as a negative sample and labeled 0. A suitable network is constructed and trained with algorithms such as gradient descent. After the gastrointestinal key sites and nasointestinal tube sign regions have been detected, the target regions of interest are available, so that subsequent algorithms can concentrate on these regions and avoid interference from useless information.
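The positive/negative labeling of candidate regions described above can be sketched in plain Python. This is a minimal illustration, not the patent's implementation: the box format, the `label_candidates` helper, and the 0.5 overlap threshold are all assumptions introduced here.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def label_candidates(candidates, gt_boxes, pos_thresh=0.5):
    """Label a candidate 1 if it sufficiently overlaps any ground-truth
    key-site/tube box (positive sample), else 0 (negative sample)."""
    labels = []
    for cand in candidates:
        best = max((iou(cand, gt) for gt in gt_boxes), default=0.0)
        labels.append(1 if best >= pos_thresh else 0)
    return labels

# A candidate right on a ground-truth box is positive; a far-away one is negative.
labels = label_candidates([(0, 0, 10, 10), (50, 50, 60, 60)], [(0, 0, 10, 10)])
```

In a full pipeline these labels would drive the training of the recognition network; here they only illustrate the positive/negative sampling rule.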
(5) Intelligent recognition model for gastrointestinal tract key parts and nasointestinal tube ultrasonic images
The network is based on the Dense Block and comprises:
(i) Convolution layers, which perform convolution operations on the feature map. For a given input feature map $V_{i,j,k}$ (the value at row $j$ and column $k$ of channel $i$), a convolution kernel $K_{i,l,m,n}$ (connecting channel $i$ of the output with channel $l$ of the input, with $m$ and $n$ the row and column offsets between input and output), and stride $s$, the output is $Z_{i,j,k} = \sum_{l,m,n} V_{l,(j-1)s+m,(k-1)s+n} K_{i,l,m,n}$.
(ii) Pooling layers, which mainly reduce the size of the feature map and give it a degree of invariance. For a given input feature map $V_{i,j,k}$, stride $s$, and window width $l$, the output is $Z_{i,j,k} = \mathrm{Operation}_{m<l,\,n<l}\{V_{i,(j-1)s+m,(k-1)s+n}\}$, where Operation is usually averaging or maximization.
(iii) Fully connected layers, which combine the previously extracted features. For a given input feature map $V_{i,j,k}$, the output is $Z_l = \sum_{i,j,k} V_{i,j,k} W_{i,j,k,l} + b_l$.
ReLU ($\max(0, x)$) and Softmax ($\mathrm{softmax}(x_i) = e^{x_i} / \sum_j e^{x_j}$) activation functions are added after the convolution layers and the fully connected layer to increase the nonlinearity of the network.
Every layer of the Dense Block is connected to the layers before it: the feature maps output by all preceding layers are fed into the next layer by channel concatenation, i.e. $x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$, where $[x_0, x_1, \ldots, x_{l-1}]$ denotes the concatenation of the output feature maps of layers $0$ through $l-1$ and the transformation $H_l$ comprises Batch Normalization, ReLU, and convolution. This denser connection pattern alleviates gradient vanishing, strengthens feature propagation, and encourages feature reuse. At the same time, convolutions with kernel size 1 reduce the number of input feature channels, lowering the dimensionality and the amount of computation while fusing the features of each channel. To improve recognition accuracy and generalization, data augmentation such as rotation, flipping, cropping, and distortion can be applied. A DenseNet composed of several Dense Blocks generalizes strongly and keeps a high recognition rate even on small data volumes, making it well suited to learning from medical image data. After a suitable model and loss function are constructed, the processed ultrasonic image data and corresponding labels are input, the network is trained with algorithms such as gradient descent, and techniques such as dropout and batch normalization are used to accelerate training and counter overfitting.
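The channel-concatenation rule $x_l = H_l([x_0, \ldots, x_{l-1}])$ can be illustrated with a toy NumPy sketch. This is a deliberately simplified assumption, not the patent's network: $H_l$ is reduced to a random 1x1 convolution plus ReLU, and the channel counts (16 input channels, growth rate 12) are arbitrary.

```python
import numpy as np

def dense_block(x0, num_layers, growth_rate, seed=0):
    """Toy dense block on a (channels, H, W) map: each layer receives the
    channel-wise concatenation of all previous outputs, x_l = H_l([x_0..x_{l-1}]).
    Here H_l is just a random 1x1 convolution followed by ReLU."""
    rng = np.random.default_rng(seed)
    feats = [x0]
    for _ in range(num_layers):
        inp = np.concatenate(feats, axis=0)                   # [x_0, ..., x_{l-1}]
        w = rng.standard_normal((growth_rate, inp.shape[0]))  # 1x1 conv weights
        out = np.maximum(np.einsum('oc,chw->ohw', w, inp), 0.0)  # conv + ReLU
        feats.append(out)
    return np.concatenate(feats, axis=0)

y = dense_block(np.ones((16, 8, 8)), num_layers=4, growth_rate=12)
# channels grow linearly: 16 + 4 * 12 = 64
```

The linear channel growth is what makes the 1x1 bottleneck convolutions mentioned above worthwhile: they cap the width of `inp` before the expensive spatial convolutions.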
(6) Segmentation model
With reference to network structures such as U-Net, DenseNet, and FPN, a network structure suited to segmenting the gastrointestinal key sites and the nasointestinal tube is designed. The main ideas are: use features from multiple layers and support feature reuse; shorten the connections between early and late layers as much as possible to strengthen feature propagation; ease gradient propagation during training by optimizing the model structure; and reduce the number of parameters to prevent overfitting. The network consists mainly of basic units such as convolution layers, pooling layers, upsampling layers, activation layers (ReLU, etc.), and batch normalization layers. The network maps the ultrasonic image to an output probability map, and a Conditional Random Field (CRF) performs further detail optimization, making the segmentation result more precise and accurate.
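As a back-of-the-envelope illustration of why the pooling and upsampling units in such an encoder-decoder lose information, one can track feature-map sizes through the network (the depths and sizes here are arbitrary examples, not values from the patent):

```python
def encoder_decoder_sizes(size, depth):
    """Feature-map side length through `depth` 2x poolings, then 2x upsamplings."""
    down = [size]
    for _ in range(depth):
        down.append(down[-1] // 2)   # pooling halves the map (integer division)
    up = [down[-1]]
    for _ in range(depth):
        up.append(up[-1] * 2)        # upsampling doubles it again
    return down, up

down, up = encoder_decoder_sizes(256, 4)    # power-of-two size: fully recovered
down2, up2 = encoder_decoder_sizes(100, 3)  # 100 -> 50 -> 25 -> 12 -> ... -> 96
```

For a power-of-two input the size is recovered exactly, but the 100-pixel input comes back as 96: the rounding in pooling is one concrete form of the information loss that the dilated convolutions of the next paragraph are designed to avoid.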
During image segmentation, pooling reduces the image size and enlarges the receptive field, while upsampling enlarges the image again; some information is lost in this shrink-then-enlarge process. Hole convolution (dilated, or atrous, convolution) enlarges the receptive field by inserting holes into the kernel: with the same parameters and amount of computation, an original 3x3 kernel attains a receptive field of 5x5 (dilation rate 2) or larger, which mitigates the loss of resolution and information during semantic image segmentation. The effective kernel size of a hole convolution is $k + (k-1)(\mathrm{rate}-1)$, where $k$ is the original kernel size and adjacent weights are separated by $\mathrm{rate}-1$ positions; an ordinary convolution has a default rate of 1. Applying a hole convolution with filter $w$ to the input feature map $x$ gives
$y[i] = \sum_{k} x[i + \mathrm{rate} \cdot k]\, w[k]$,
where $y$ is the output feature map, $i$ is each position on the output feature map, and $w$ is the convolution filter.
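A one-dimensional NumPy version of this formula makes the effect of the rate concrete (a sketch for illustration only; the patent works with 2-D feature maps):

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """Valid 1-D dilated convolution: y[i] = sum_k x[i + rate*k] * w[k]."""
    k = len(w)
    k_eff = k + (k - 1) * (rate - 1)      # effective kernel size
    return np.array([np.dot(x[i:i + k_eff:rate], w)
                     for i in range(len(x) - k_eff + 1)])

x = np.arange(10.0)
w = np.ones(3)                  # 3-tap box filter
y1 = dilated_conv1d(x, w, 1)    # ordinary convolution, taps i, i+1, i+2
y2 = dilated_conv1d(x, w, 2)    # same 3 weights now span 5 inputs: i, i+2, i+4
```

With rate 2 the same three weights cover an effective window of 5, so the output is shorter and each output position sees a wider context, exactly as the 3x3-to-5x5 receptive-field claim above describes in 2-D.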
To capture context information at multiple scales, parallel hole convolutions with different rates are applied in an atrous spatial pyramid pooling (ASPP) module. With reference to the DeepLabv3+ and Xception models, applying depthwise separable convolution to the ASPP and decoder modules can form a faster and more powerful encoder-decoder network that obtains sharp object boundaries.
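Using the effective-size formula given above, the receptive fields of the parallel ASPP branches can be tabulated. The rates 6, 12, and 18 are the common DeepLabv3+ defaults, used here only as an illustrative assumption; the patent does not specify its rates.

```python
def effective_kernel(k, rate):
    """Effective size of a k-tap kernel dilated at the given rate."""
    return k + (k - 1) * (rate - 1)

# Parallel 3x3 branches at increasing rates see increasingly large contexts.
sizes = [effective_kernel(3, r) for r in (1, 6, 12, 18)]
# -> [3, 13, 25, 37]
```

Running the branches in parallel and concatenating their outputs is what lets the module mix fine local detail with wide context in a single layer.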
With the above arrangement, the technique is intelligent, rapid, accurate, and objective. It greatly reduces dependence on the operator's technical maturity, can provide clinically useful information in a variety of application scenarios, improves diagnosis and treatment efficiency, and reduces unnecessary waste of medical resources.
It is to be understood that the exemplary embodiments described herein are illustrative and not restrictive. Although one or more embodiments of the present invention have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims (6)

1. An artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method is characterized by comprising the following steps:
collecting ultrasonic images containing the key examination sites of the digestive tract and image data of the different ultrasonic features of the nasointestinal tube, classifying them according to examination site and ultrasonic feature, converting the classified data into structured ultrasonic image data through labeling, and establishing an ultrasonic image database as a training set;
training with the structured nasointestinal tube data in the ultrasonic image database to obtain an intelligent nasointestinal tube identification model;
collecting further ultrasonic image data containing nasointestinal tube sonograms as a verification set, and analyzing it with the trained intelligent identification model to obtain analysis results including identification and classification prediction probabilities;
outputting and analyzing the prediction results, continuing to collect nasointestinal tube ultrasonic image data, and continuously training and optimizing the intelligent identification model;
integrating the intelligent nasointestinal tube identification model into the ultrasonic terminal, thereby realizing intelligent identification and positioning of the nasointestinal tube.
2. The artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method according to claim 1, wherein the key examination site images of the digestive tract comprise the cervical esophagus, the lower esophagus-cardia, the gastric antrum, the pylorus, the duodenal bulb, and the duodenal horizontal part.
3. The artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method according to claim 1, wherein the nasointestinal tube ultrasonic feature images comprise a double-track sign, an equal sign, a bright band sign, a five-line sign, a bead-like mural gas sign, and a short-axis acoustic shadow sign.
4. The artificial intelligence based nasointestinal tube ultrasonic identification and positioning method as claimed in any one of claims 1 to 3, wherein the nasointestinal tube intelligent identification model is integrated into the ultrasonic terminal by adopting an edge computing technology.
5. The artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method according to any one of claims 1 to 3, wherein the classification prediction comprises clear, fuzzy, not clearly displayed, in place, and not in place.
6. The artificial intelligence based nasointestinal tube ultrasonic identification and positioning method as claimed in any one of claims 1 to 3, further comprising an error correction function.
CN202010960254.3A 2020-09-14 2020-09-14 Artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method Pending CN114266915A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010960254.3A CN114266915A (en) 2020-09-14 2020-09-14 Artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method


Publications (1)

Publication Number Publication Date
CN114266915A true CN114266915A (en) 2022-04-01

Family

ID=80824072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010960254.3A Pending CN114266915A (en) 2020-09-14 2020-09-14 Artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method

Country Status (1)

Country Link
CN (1) CN114266915A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115064264A (en) * 2022-06-20 2022-09-16 四川大学华西医院 Method and system for real-time evaluation of gastrointestinal function and movement rhythm based on ultrasound


Similar Documents

Publication Publication Date Title
CN109670510B (en) Deep learning-based gastroscope biopsy pathological data screening system
CN112489061B (en) Deep learning intestinal polyp segmentation method based on multi-scale information and parallel attention mechanism
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN109741317B (en) Intelligent evaluation method for medical image
CN110021425B (en) Comparison detector, construction method thereof and cervical cancer cell detection method
CN111882560B (en) Lung parenchyma CT image segmentation method based on weighted full convolution neural network
CN110555836A (en) Automatic identification method and system for standard fetal section in ultrasonic image
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
CN111915573A (en) Digestive endoscopy focus tracking method based on time sequence feature learning
CN110600122A (en) Digestive tract image processing method and device and medical system
US20240070858A1 (en) Capsule endoscope image recognition method based on deep learning, and device and medium
CN110729045A (en) Tongue image segmentation method based on context-aware residual error network
CN114331971A (en) Ultrasonic endoscope target detection method based on semi-supervised self-training
CN111145200B (en) Blood vessel center line tracking method combining convolutional neural network and cyclic neural network
CN113781489B (en) Polyp image semantic segmentation method and device
CN111724401A (en) Image segmentation method and system based on boundary constraint cascade U-Net
CN111260639A (en) Multi-view information-collaborative breast benign and malignant tumor classification method
CN111079901A (en) Acute stroke lesion segmentation method based on small sample learning
CN116579982A (en) Pneumonia CT image segmentation method, device and equipment
CN115205520A (en) Gastroscope image intelligent target detection method and system, electronic equipment and storage medium
CN114266915A (en) Artificial intelligence-based nasointestinal tube ultrasonic identification and positioning method
CN112419246B (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
CN116468682A (en) Magnetic control capsule endoscope image stomach anatomy structure identification method based on deep learning
CN116052871A (en) Computer-aided diagnosis method and device for cervical lesions under colposcope
Li et al. Ulcer recognition in capsule endoscopy images by texture features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination