CN112801958A - Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium - Google Patents

Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium

Info

Publication number
CN112801958A
CN112801958A (application CN202110061525.6A)
Authority
CN
China
Prior art keywords
image
ultrasonic endoscope
deep learning
interstitial
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110061525.6A
Other languages
Chinese (zh)
Inventor
李晓宇
杨新天
王晗
董蒨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Affiliated Hospital of University of Qingdao
Original Assignee
Affiliated Hospital of University of Qingdao
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Affiliated Hospital of University of Qingdao filed Critical Affiliated Hospital of University of Qingdao
Priority to CN202110061525.6A priority Critical patent/CN112801958A/en
Publication of CN112801958A publication Critical patent/CN112801958A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention belongs to the technical field of medical artificial intelligence and discloses an ultrasonic endoscope, an artificial intelligence-assisted identification method, a system, a terminal and a medium. An image acquisition module acquires, in real time, image data of the ultrasonic endoscope video or of a static image in the ultrasonic endoscope monitor and intercepts image frames. An image segmentation module segments and extracts the tumor region image from the acquired image, using either manual image segmentation or a deep learning segmentation model. An image conversion module unifies the size of the segmented images and normalizes them to obtain a modular picture, namely a standardized lesion-site image. An image classification module classifies the modular picture as an interstitial tumor image or a leiomyoma image using a deep learning classification model, and an output module outputs the image classification result. The method provided by the invention can effectively improve image identification accuracy and reduce misdiagnosis.

Description

Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium
Technical Field
The invention belongs to the technical field of medical artificial intelligence and particularly relates to an ultrasonic endoscope, an artificial intelligence-assisted identification method, a system, a terminal and a medium, and in particular to a method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope.
Background
At present, the ultrasonic endoscope can clearly display the layers of the gastrointestinal tract wall and the layer of origin of submucosal lesions. It is the imaging technology with the highest accuracy for diagnosing gastrointestinal submucosal tumors and is widely applied to their screening and diagnosis.
Stromal tumors (also referred to in this document as interstitial tumors) and leiomyomas are the most common gastrointestinal submucosal tumors. Stromal tumors are malignant or potentially malignant and require close follow-up and surgical treatment; leiomyoma is considered benign and generally does not require surgical resection. However, the imaging appearances of stromal tumors and leiomyomas are very similar, and current imaging techniques, even the ultrasonic endoscope, cannot reliably distinguish them, so a large number of patients are misdiagnosed. Endoscopic tissue biopsy is difficult for smaller lesions (diameter < 20 mm) and has a low diagnostic yield, with risks of bleeding and perforation of the digestive tract. Some misdiagnosed stromal tumor patients, lacking a clear diagnosis despite needing close follow-up and timely surgery, have their diagnosis and treatment delayed; some misdiagnosed leiomyoma patients, who do not need surgery, undergo repeated gastroscopy and unnecessary operations, bearing the physiological pain and risks of surgery and an increased economic burden.
The application of artificial intelligence, particularly computer vision, in the medical field has grown rapidly in recent years, and interpreting medical images with artificial intelligence can achieve a diagnostic accuracy exceeding that of human imaging experts. However, there is at present no artificial intelligence diagnostic application under the ultrasonic endoscope, and in particular none for diagnosing interstitial tumor and leiomyoma.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Existing imaging detection and identification methods cannot accurately distinguish interstitial tumor images from leiomyoma images, and the misdiagnosis rate is high.
(2) Endoscopic tissue biopsy is difficult to perform; in particular, the rate of obtaining adequate samples from small lesions (diameter less than 20 mm) is low, so the diagnostic yield is low and the diagnosis remains unclear. Biopsy is also an invasive test with risks of bleeding and perforation of the digestive tract.
(3) There is at present no artificial intelligence diagnostic application under ultrasonic endoscopy, and in particular none for identifying interstitial tumor and leiomyoma images.
The difficulty in solving the above problems and defects is:
(1) A large amount of case data with high-quality ultrasonic endoscope pictures and definite pathological diagnoses must be collected as the database for training the artificial intelligence model.
(2) Professional endosonographers are needed to annotate the lesions in the ultrasonic endoscope pictures for training and validating the artificial intelligence model.
(3) An artificial intelligence model with good diagnostic performance must be selected through repeated parameter tuning, and a front-end interface of the artificial intelligence system must be programmed.
(4) Tests on a sufficient number of cases are needed to further verify the reliability of the artificial intelligence system.
The significance of solving the problems and the defects is as follows:
(1) The diagnostic accuracy for gastrointestinal stromal tumor and leiomyoma under ultrasonic endoscopy is improved.
(2) Patients with interstitial tumor or leiomyoma can avoid invasive preoperative biopsy and long-term endoscopic follow-up caused by an unclear diagnosis.
(3) With the improved diagnostic accuracy for gastrointestinal stromal tumor and leiomyoma under ultrasonic endoscopy, delays in diagnosis and treatment and unnecessary operations are avoided.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an ultrasonic endoscope, an artificial intelligence auxiliary identification method, a system, a terminal and a medium.
The invention is realized as follows: a method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope comprises the following steps:
step one, acquiring, in real time through an image acquisition module, image data of the ultrasonic endoscope video or of a static image in the ultrasonic endoscope monitor, and intercepting image frames;
step two, segmenting and extracting the tumor region image from the acquired image through an image segmentation module, using manual image segmentation or a deep learning segmentation model;
step three, unifying the size of the segmented images through an image conversion module and performing normalization processing to obtain a modular picture, namely a standardized lesion-site image; classifying the modular picture as an interstitial tumor image or a leiomyoma image with a deep learning classification model through an image classification module;
and step four, outputting the image classification result through an output module.
Further, in step three, the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope further comprises: constructing a deep learning classification model and a deep learning segmentation model;
the constructing of the deep learning classification model and the deep learning segmentation model comprises the following steps:
(1) obtaining qualified ultrasonic endoscope static images and video data, screened from a historical ultrasonic endoscope database, with a pathological diagnosis of interstitial tumor or leiomyoma;
(2) extracting each frame of the obtained video data as static images and combining these with the static images obtained directly from the database to form a data set; dividing the images in the data set into qualified and unqualified images according to whether the image is displayed clearly, whether the gastrointestinal tract layers are clear and whether the lesion origin layer is clear;
(3) selecting the tumor region in each qualified image for image segmentation, unifying the size of the segmented images to obtain modular pictures, and labeling each modular picture with its disease type according to the pathological type in the pathological diagnosis to form a modular picture set;
(4) randomly dividing the interstitial tumor and leiomyoma static picture sets and modular picture sets into a training set, a verification set and a test set using a random number method;
(5) performing feature learning on the ultrasonic endoscope pictures in the training set data with networks such as Faster R-CNN and YOLO to construct the deep learning segmentation model, screening the deep learning segmentation model with the verification set, and verifying its diagnostic performance with the test set data;
(6) performing feature learning on the modular picture data of the training set with networks such as ResNet and Inception to construct the deep learning classification model, screening the deep learning classification model with the verification set data, and finally verifying its diagnostic performance with the test set data;
(7) setting the parameters of the deep learning models.
Further, the distribution proportion of the training set, the verification set and the test set is 7:1.5:1.5.
Further, the parameter setting of the deep learning model includes: setting, in sequence, the initial training learning rate, the learning rate schedule callback parameter (LearningRateScheduler), the number of training rounds (Epoch), the training batch size, the verification batch size, the optimization function (SGD) and the output layer function (Sigmoid).
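For illustration, the parameter setting described above maps naturally onto a PyTorch training configuration. The following is a minimal sketch under assumed values: the learning rate, step-decay schedule, 50 epochs and batch sizes of 32 are placeholders chosen for the example, not values disclosed in this application, and `configure_training` is a hypothetical helper name.

```python
# Minimal sketch of the parameter setting described above, in PyTorch.
# All concrete values (lr, step size, epochs, batch sizes) are assumptions.
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR

def configure_training(model: nn.Module):
    optimizer = SGD(model.parameters(), lr=1e-3, momentum=0.9)  # initial training learning rate
    scheduler = StepLR(optimizer, step_size=10, gamma=0.1)      # learning rate schedule callback
    epochs = 50                                                 # training rounds (Epoch)
    train_batch_size, val_batch_size = 32, 32                   # training / verification batch sizes
    output_layer = nn.Sigmoid()                                 # output layer function
    criterion = nn.BCELoss()                                    # loss paired with the Sigmoid output
    return optimizer, scheduler, epochs, train_batch_size, val_batch_size, output_layer, criterion
```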
Further, the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope further comprises: performing data enhancement using batch normalization together with 90° rotation, mirroring, blurring and other image transformations.
Another object of the present invention is to provide a system for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope, which implements the above identification method and includes:
the image acquisition module is used for acquiring image data of an ultrasonic endoscope video or a static image in an ultrasonic endoscope monitor in real time and intercepting an image frame;
the image segmentation module is used for segmenting and extracting the tumor part image by adopting artificial image segmentation or a deep learning segmentation model based on the acquired image;
the image conversion module is used for unifying the sizes of the divided images and carrying out normalization processing to obtain a modular picture, namely a standardized focus position image;
the image classification module is used for dividing the modularized pictures into interstitial tumor images or leiomyoma images by using a deep learning classification model;
and the output module is used for outputting the image classification result.
Another object of the present invention is to provide a computer apparatus, which includes a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope.
Another object of the present invention is to provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the processor is enabled to execute the method for artificially and intelligently assisting in identifying interstitial tumors and leiomyoma under an ultrasonic endoscope.
The invention also aims to provide an information data processing terminal which is used for realizing the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope.
The invention also aims to provide the ultrasonic endoscope, which is provided with the artificial intelligent auxiliary identification system under the ultrasonic endoscope for the interstitial tumor and the leiomyoma, and implements the artificial intelligent auxiliary identification method under the ultrasonic endoscope for the interstitial tumor and the leiomyoma.
By combining all the technical schemes, the invention has the advantages and positive effects that: the invention provides an artificial intelligent auxiliary identification system, method, terminal and medium under an ultrasonic endoscope for identifying interstitial tumor and leiomyoma based on deep learning, which are used for identifying the ultrasonic endoscope picture of the interstitial tumor or leiomyoma and giving diagnosis prompt to an endoscope doctor, thereby obviously improving the image identification accuracy and reducing misdiagnosis.
On the test set of 104 patients with pathologically confirmed diagnoses (50 interstitial tumors and 54 leiomyomas), the interpretation of the ultrasonic endoscope images gave the following results: the diagnostic accuracy of 4 professional endoscopists was 76% (79/104), with a sensitivity of 84% (42/50) and a specificity of 68.5% (37/54); the diagnostic accuracy of the invention was 96.2% (100/104), with a sensitivity of 98% (49/50) and a specificity of 94.4% (51/54). The diagnostic accuracy of the invention is significantly higher than that of the endoscopists, and misdiagnosed cases can be markedly reduced. Meanwhile, the invention provides patients with interstitial tumor or leiomyoma with a noninvasive preoperative diagnostic option that avoids invasive endoscopic biopsy. In addition, the system is simple to use and operate, and no special training of endoscopists is required.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained from the drawings without creative efforts.
Fig. 1 is a flow chart of the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope provided by the embodiment of the invention.
Fig. 2 is a schematic structural diagram of the system for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope provided by the embodiment of the invention.
FIG. 3 is a schematic diagram showing, in one coordinate system, the receiver operating characteristic (ROC) curve of the invention for classifying the 104 cases of ultrasonic endoscope image data in the test set, together with the sensitivity and specificity of the four professional physicians.
In the figures: 1, image acquisition module; 2, image segmentation module; 3, image conversion module; 4, image classification module; 5, output module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to solve the problems in the prior art, the invention provides an ultrasonic endoscope, an artificial intelligence assisted identification method, a system, a terminal and a medium, and the invention is described in detail with reference to the accompanying drawings.
As shown in fig. 1, the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope provided by the embodiment of the present invention includes the following steps:
s101, acquiring image data of an ultrasonic endoscope video or a static image in an ultrasonic endoscope monitor in real time through an image acquisition module, and intercepting an image frame;
s102, carrying out segmentation and extraction on the tumor part image based on the acquired image by adopting artificial image segmentation or a deep learning segmentation model through an image segmentation module;
s103, unifying the size of the segmented images through an image conversion module, and performing normalization processing to obtain a standardized lesion site image; dividing the modularized picture into interstitial tumor images or leiomyoma images by using a deep learning classification model through an image classification module;
and S104, outputting the image classification result through an output module.
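Taken together, S101 to S104 form a linear inference pipeline. The sketch below illustrates one possible wiring under stated assumptions: the `segment_lesion` helper, the 224×224 modular-picture size, the normalization statistics and the 0.5 decision threshold are hypothetical placeholders and are not fixed by this application; the classifier is assumed to end in a Sigmoid and return a probability.

```python
# Hedged sketch of the S101-S104 pipeline; sizes, statistics and the
# decision threshold are illustrative assumptions.
import torch
from torchvision import transforms

# Image conversion step (S103): unify size and normalize (assumed values).
to_modular_picture = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

def classify_frame(frame, segment_lesion, cls_model):
    """frame: an image frame intercepted from the EUS monitor (S101)."""
    lesion_crop = segment_lesion(frame)               # S102: manual or deep-learning segmentation
    x = to_modular_picture(lesion_crop).unsqueeze(0)  # S103: modular picture
    with torch.no_grad():
        prob = cls_model(x).item()                    # S103: classifier assumed to output a probability
    return "interstitial tumor" if prob >= 0.5 else "leiomyoma"  # S104: passed to the output module
```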
In step S103, the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope provided in the embodiment of the present invention further includes: constructing a deep learning classification model and a deep learning segmentation model;
the constructing of the deep learning classification model and the deep learning segmentation model comprises the following steps:
(1) obtaining qualified ultrasonic endoscope static images and video data, screened from a historical ultrasonic endoscope database, with a pathological diagnosis of interstitial tumor or leiomyoma;
(2) extracting each frame of the obtained video data as static images and combining these with the static images obtained directly from the database to form a data set; dividing the images in the data set into qualified and unqualified images according to whether the image is displayed clearly, whether the gastrointestinal tract layers are clear and whether the lesion origin layer is clear;
(3) selecting the tumor region in each qualified image for image segmentation, unifying the size of the segmented images to obtain modular pictures, and labeling each modular picture with its disease type according to the pathological type in the pathological diagnosis to form a modular picture set;
(4) randomly dividing the interstitial tumor and leiomyoma static picture sets and modular picture sets into a training set, a verification set and a test set using a random number method;
(5) normalizing the data and performing feature learning on the ultrasonic endoscope pictures in the training set data with networks such as Faster R-CNN and YOLO to construct the deep learning segmentation model, screening the deep learning segmentation model with the verification set, and verifying its diagnostic performance with the test set data;
(6) performing feature learning on the modular picture data of the training set with networks such as ResNet and Inception to construct the deep learning classification model, screening the deep learning classification model with the verification set data, and finally verifying its diagnostic performance with the test set data;
(7) setting the parameters of the deep learning models.
The distribution proportion of the training set, the verification set and the test set provided by the embodiment of the invention is 7: 1.5: 1.5.
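The 7:1.5:1.5 random split can be realized, for example, as in the sketch below; the per-class handling and the fixed random seed are assumptions added for illustration and reproducibility, not details disclosed in this application.

```python
# Sketch of the 7:1.5:1.5 random split, applied per class so that the
# interstitial tumor and leiomyoma picture sets are divided independently.
# The fixed seed is an illustrative assumption.
import random

def split_dataset(items, ratios=(0.7, 0.15, 0.15), seed=42):
    items = list(items)
    random.Random(seed).shuffle(items)       # random number method
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    train = items[:n_train]                  # training set
    val = items[n_train:n_train + n_val]     # verification set
    test = items[n_train + n_val:]           # test set
    return train, val, test

# Usage (hypothetical variable names): split each picture set separately, e.g.
# gist_train, gist_val, gist_test = split_dataset(interstitial_modular_pictures)
# leio_train, leio_val, leio_test = split_dataset(leiomyoma_modular_pictures)
```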
The parameter setting for the deep learning model provided by the embodiment of the invention comprises: setting, in sequence, the initial training learning rate, the learning rate schedule callback parameter (LearningRateScheduler), the number of training rounds (Epoch), the training batch size, the verification batch size, the optimization function (SGD) and the output layer function (Sigmoid).
The method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope provided by the embodiment of the invention further comprises: performing data enhancement using batch normalization together with 90° rotation, mirroring, blurring and other image transformations.
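One way to express this augmentation with torchvision transforms is sketched below; the flip probabilities, the Gaussian-blur kernel and the ordering of the transforms are assumptions made for the example, and batch normalization itself is applied inside the network layers rather than in the data pipeline.

```python
# Sketch of the data enhancement described above; probabilities and the blur
# kernel size are illustrative assumptions.
import random
from torchvision import transforms
import torchvision.transforms.functional as TF

class RandomRotate90:
    """Rotate the image by a random multiple of 90 degrees."""
    def __call__(self, img):
        return TF.rotate(img, angle=90 * random.randint(0, 3))

augment = transforms.Compose([
    RandomRotate90(),                                          # 90-degree rotation
    transforms.RandomHorizontalFlip(p=0.5),                    # mirroring
    transforms.RandomVerticalFlip(p=0.5),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # blurring
])
# Batch normalization is realized inside the network (e.g. nn.BatchNorm2d layers),
# not as part of this transform pipeline.
```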
As shown in fig. 2, the system for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope provided by the embodiment of the present invention includes:
the image acquisition module 1 is used for acquiring, in real time, image data of the ultrasonic endoscope video or of a static image in the ultrasonic endoscope monitor and intercepting image frames;
the image segmentation module 2 is used for segmenting and extracting the tumor part image by adopting artificial image segmentation or a deep learning segmentation model based on the acquired image;
the image conversion module 3 is used for unifying the size of the segmented images and performing normalization processing to obtain a modular picture, namely a standardized lesion-site image (an illustrative preprocessing sketch is given after this module list);
the image classification module 4 is used for dividing the modularized pictures into interstitial tumor images or leiomyoma images by using a deep learning classification model;
and the output module 5 is used for outputting the image classification result.
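As noted for the image conversion module 3 above, the segmented lesion image is resized to a uniform modular picture and normalized. The sketch below illustrates one plausible implementation: the 224×224 target size, the bounding-box crop interface and the use of OpenCV are assumptions; the per-image mean/standard-deviation normalization follows the description given in claim 2.

```python
# Sketch of the image conversion module: crop the segmented tumor region,
# unify the size, and normalize by pixel mean and standard deviation.
# The 224x224 size and the (x, y, w, h) bounding-box interface are assumptions.
import numpy as np
import cv2

def to_modular_picture(image: np.ndarray, bbox) -> np.ndarray:
    """image: grayscale EUS frame; bbox: (x, y, w, h) of the segmented lesion."""
    x, y, w, h = bbox
    lesion = image[y:y + h, x:x + w]                 # segmented lesion region
    lesion = cv2.resize(lesion, (224, 224))          # unify the picture size
    lesion = lesion.astype(np.float32)
    mean, std = lesion.mean(), lesion.std() + 1e-8   # pixel mean / standard deviation
    return (lesion - mean) / std                     # standardized lesion-site image
```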
The technical effects of the present invention will be further described with reference to specific embodiments.
The invention provides a deep-learning-based artificial intelligence-assisted diagnosis system under the ultrasonic endoscope for distinguishing interstitial tumor and leiomyoma, which specifically comprises:
the image acquisition module is used for acquiring images of an ultrasonic endoscope video or a static image in an ultrasonic endoscope monitor in real time and intercepting image frames;
the image input module is connected with the image acquisition module; manual image segmentation can be selected, or the image frames can be fed into the deep learning segmentation model, so as to segment and extract the tumor region, and the segmented images are unified into modular pictures of consistent size for output;
the image classification module is connected with the image input module, inputs the modularized images into the deep learning classification model and takes the classification of interstitial tumors or leiomyoma as output;
and the conclusion display module is connected with the image classification module and outputs a classification result according to the deep learning classification model to display a diagnosis conclusion of the auxiliary diagnosis system.
The modular picture is a standardized focus picture of a static picture of the ultrasonic endoscope after image segmentation and picture size unification.
The deep learning segmentation model takes a static picture of the ultrasonic endoscope as input, is used for segmenting a picture of a focus part in an ultrasonic endoscope image, and takes the segmented focus part picture as output.
The deep learning classification model takes the modular picture as input and is used for classifying whether the focus in the modular picture is interstitial tumor or leiomyoma, and the classification result is output.
Specifically, the deep learning segmentation model is a convolutional neural network based on Faster R-CNN and YOLO, the deep learning classification model is a convolutional neural network such as ResNet or Inception, and the models are developed in the Python language;
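A minimal sketch of such a classification network is given below, assuming a torchvision ResNet backbone adapted to a single-logit output followed by a Sigmoid; the choice of ResNet-18, ImageNet pretraining and the three-channel 224×224 input are assumptions rather than details specified in this application.

```python
# Sketch of the deep learning classification model: a ResNet backbone with a
# single-output head and Sigmoid activation for the interstitial tumor vs.
# leiomyoma decision. ResNet-18 and pretrained weights are assumptions.
import torch.nn as nn
from torchvision import models

class LesionClassifier(nn.Module):
    def __init__(self, pretrained: bool = True):
        super().__init__()
        self.backbone = models.resnet18(pretrained=pretrained)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single logit
        self.activation = nn.Sigmoid()                                 # output layer function

    def forward(self, x):
        # x: batch of modular pictures, shape (N, 3, 224, 224)
        return self.activation(self.backbone(x))  # probability of interstitial tumor (assumed convention)
```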
specifically, the development of the deep learning segmentation model and the deep learning classification model comprises the following steps:
1. Qualified ultrasonic endoscope static pictures and videos with a pathological diagnosis of interstitial tumor or leiomyoma are screened and obtained from the historical ultrasonic endoscope database.
2. Each frame of the acquired videos is extracted to obtain static pictures, which, together with the static pictures obtained directly from the database, form the data set. The static pictures are manually classified into qualified and unqualified pictures according to whether the picture is displayed clearly, whether the gastrointestinal tract layers are clear and whether the lesion origin layer is clear. In the qualified static pictures, the tumor region is manually selected with the Labelme annotation software for picture segmentation and labeled according to the pathological type of the pathological diagnosis, forming the ultrasonic endoscope picture set. The segmented pictures are unified in size and normalized to obtain modular pictures, which are labeled with the disease type according to the pathological diagnosis to form the modular picture set.
3. Using a random number method, the interstitial tumor and leiomyoma static picture sets and modular picture sets are each randomly divided into a training set, a verification set and a test set at a ratio of 7:1.5:1.5, and are used to develop the deep learning segmentation model and the deep learning classification model, respectively.
4. The models are developed in the Python language (version 3.7.0), and the convolutional neural networks are programmed with the PyTorch library. The deep learning segmentation model uses networks such as Faster R-CNN and YOLO; the model is constructed by feature learning on the ultrasonic endoscope pictures in the training set data, the deep learning segmentation model is screened with the verification set, and its diagnostic performance is finally verified with the test set data (an illustrative fine-tuning sketch is given after this list). The deep learning classification model uses networks such as ResNet and Inception for feature learning on the modular picture data of the training set; the deep learning classification model is screened with the verification set data, and its diagnostic performance is finally verified with the test set data.
5. Deep learning model parameter setting: setting, in sequence, the initial training learning rate, the learning rate schedule callback parameter (LearningRateScheduler), the number of training rounds (Epoch), the training batch size, the verification batch size, the optimization function (SGD) and the output layer function (Sigmoid).
6. Batch normalization and data enhancement methods such as 90° rotation, mirroring and blurring of the images are adopted to reduce overfitting.
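For the lesion-localization part referenced in step 4, torchvision ships a Faster R-CNN implementation that can be fine-tuned on the annotated ultrasonic endoscope pictures. The sketch below is illustrative only: the two-class head (background vs. lesion), the pretrained backbone, the optimizer settings and the data-loader interface are assumptions, not details disclosed in this application.

```python
# Sketch of fine-tuning a torchvision Faster R-CNN to localize the lesion
# region (background + lesion = 2 classes). Pretraining and the training-loop
# details are illustrative assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_lesion_detector(num_classes: int = 2):
    model = fasterrcnn_resnet50_fpn(pretrained=True)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train_one_epoch(model, loader, optimizer, device="cuda"):
    model.train()
    for images, targets in loader:  # targets: list of dicts with "boxes" and "labels"
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # detection losses returned in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```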
The deep-learning-based method for artificial intelligence-assisted diagnosis under the ultrasonic endoscope for distinguishing interstitial tumor and leiomyoma comprises the following steps:
S1, the auxiliary diagnosis system extracts an ultrasonic endoscope picture from the ultrasonic endoscope monitor.
S2, the auxiliary diagnosis system performs image segmentation according to the tumor position in the ultrasonic endoscope picture and unifies the image size to obtain a modular picture.
S3, the auxiliary diagnosis system takes the modular picture as input and the diagnosis of interstitial tumor or leiomyoma as output, giving the auxiliary diagnosis conclusion of the system.
With the postoperative pathological diagnosis as the gold standard, and with the area under the receiver operating characteristic curve (AUC), diagnostic accuracy, sensitivity and specificity as evaluation indexes, the auxiliary diagnosis system was used to classify the test set data, and the results were compared with the diagnoses of 4 professional endoscopists (each with more than 8 years of experience in ultrasonic endoscope interpretation) to evaluate the diagnostic performance of the auxiliary diagnosis system.
The results show that the AUC of the artificial intelligence-assisted diagnosis system under the ultrasonic endoscope for interstitial tumor and leiomyoma is 0.986, with a diagnostic accuracy of 96.2% (100/104), a sensitivity of 98% (49/50) and a specificity of 94.4% (51/54). The diagnostic accuracy of the 4 professional endoscopists serving as controls was 76% (79/104), with a sensitivity of 84% (42/50) and a specificity of 68.5% (37/54). The diagnostic accuracy of the invention is significantly higher than that of the endoscopists.
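The evaluation indexes above can be computed from the per-case predicted probabilities and the pathological gold standard. The sketch below uses scikit-learn; the label convention (1 = interstitial tumor, 0 = leiomyoma) and the 0.5 decision threshold are assumptions made for illustration.

```python
# Sketch of computing the reported evaluation indexes on the test set.
# The label convention and the 0.5 threshold are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(y_true, y_prob, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, y_prob),         # area under the ROC curve
        "accuracy": (tp + tn) / (tp + tn + fp + fn),  # diagnostic accuracy
        "sensitivity": tp / (tp + fn),                # true positive rate
        "specificity": tn / (tn + fp),                # true negative rate
    }
```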
Please refer to fig. 3, which shows, in one coordinate system, the receiver operating characteristic (ROC) curve of the invention for classifying the 104 cases of ultrasonic endoscope image data in the test set, together with the sensitivity and specificity of the four professional physicians.
The above description is only a specific embodiment of the present invention and is not intended to limit the scope of protection of the invention; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall fall within the protection scope defined by the appended claims.

Claims (10)

1. An artificial intelligence-assisted identification method under an ultrasonic endoscope for interstitial tumor and leiomyoma, characterized by comprising the following steps:
acquiring image data of an ultrasonic endoscope video or a static image in an ultrasonic endoscope monitor in real time through an image acquisition module, and intercepting an image frame;
the image segmentation module is used for segmenting and extracting the tumor part image by adopting artificial image segmentation or utilizing a deep learning segmentation model based on the acquired image;
unifying the size of the segmented images through an image conversion module, and carrying out normalization processing to obtain a modular image, namely a standardized focus position image;
dividing the modularized picture into interstitial tumor images or leiomyoma images by using a deep learning classification model through an image classification module;
and outputting the image classification result through an output module.
2. The method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope according to claim 1, further comprising: constructing a deep learning classification model and a deep learning segmentation model;
the constructing of the deep learning classification model and the deep learning segmentation model comprises the following steps:
(1) obtaining qualified ultrasonic endoscope static images and image data which are screened from a historical ultrasonic endoscope database and are pathologically diagnosed as interstitial tumor or leiomyoma;
(2) extracting each frame of the obtained image data to obtain a static image, and constructing the static image and the static image directly obtained from the database to obtain a data set; dividing the images in the data set into qualified images and unqualified images based on whether the images in the data set are displayed clearly, whether gastrointestinal tract layers are clear and whether lesion origin layers are clear;
(3) selecting a tumor area in a qualified image for image segmentation, carrying out size normalization processing on the segmented image and normalization processing according to an image pixel average value and a standard deviation to obtain a modular image, and marking disease types of the modular image according to a pathological type of a pathological diagnosis conclusion to serve as a modular image set;
(4) respectively randomly dividing static picture sets and modular picture sets of the interstitial tumor and the leiomyoma into a training set, a verification set and a test set by adopting a random number method;
(5) performing feature learning on the ultrasonic endoscope pictures in the training set data with networks such as Faster R-CNN and YOLO to construct the deep learning segmentation model, screening the deep learning segmentation model with the verification set, and verifying its diagnostic performance with the test set data;
(6) performing feature learning on the modular picture data of the training set with networks such as ResNet and Inception to construct the deep learning classification model, screening the deep learning classification model through the verification set data, and finally verifying its diagnostic performance through the test set data.
3. The method for artificially and intelligently identifying interstitial tumors and leiomyoma under an ultrasonic endoscope according to claim 2, wherein the distribution proportion of the training set, the verification set and the test set is 7: 1.5: 1.5.
4. The method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope as claimed in claim 2, wherein the parameter setting for the deep learning model comprises: setting, in sequence, the initial training learning rate, the learning rate schedule callback parameter (LearningRateScheduler), the number of training rounds (Epoch), the training batch size, the verification batch size, the optimization function (SGD) and the output layer function (Sigmoid).
5. The method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope according to claim 1, further comprising: performing data enhancement using Batch Normalization and 90° rotation, mirroring, blurring and other image transformations.
6. A system for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under an ultrasonic endoscope, characterized in that the system comprises:
the image acquisition module is used for acquiring image data of an ultrasonic endoscope video or a static image in an ultrasonic endoscope monitor in real time and intercepting an image frame;
the image segmentation module is used for segmenting and extracting the tumor part image by adopting artificial image segmentation or a deep learning segmentation model based on the acquired image;
the image conversion module is used for unifying the sizes of the divided images and carrying out normalization processing to obtain a modular picture, namely a standardized focus position image;
the image classification module is used for dividing the modularized pictures into interstitial tumor images or leiomyoma images by using a deep learning classification model;
and the output module is used for outputting the image classification result.
7. A computer device comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope according to any one of claims 1 to 5.
8. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the method for artificial intelligence-assisted identification of interstitial tumor and leiomyoma under the ultrasonic endoscope according to any one of claims 1 to 5.
9. An information data processing terminal, characterized in that the information data processing terminal is used for realizing the artificial intelligent auxiliary identification method for interstitial tumor and leiomyoma under ultrasonic endoscope of any one of claims 1-5.
10. An ultrasonic endoscope, which is characterized by carrying the system for artificially and intelligently identifying interstitial tumor and leiomyoma under the ultrasonic endoscope of claim 6 and implementing the method for artificially and intelligently identifying interstitial tumor and leiomyoma under the ultrasonic endoscope of any one of claims 1-5.
CN202110061525.6A 2021-01-18 2021-01-18 Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium Pending CN112801958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110061525.6A CN112801958A (en) 2021-01-18 2021-01-18 Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110061525.6A CN112801958A (en) 2021-01-18 2021-01-18 Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium

Publications (1)

Publication Number Publication Date
CN112801958A true CN112801958A (en) 2021-05-14

Family

ID=75810029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110061525.6A Pending CN112801958A (en) 2021-01-18 2021-01-18 Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium

Country Status (1)

Country Link
CN (1) CN112801958A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369151A (en) * 2017-06-07 2017-11-21 万香波 System and method are supported in GISTs pathological diagnosis based on big data deep learning
CN108852268A (en) * 2018-04-23 2018-11-23 浙江大学 A kind of digestive endoscopy image abnormal characteristic real-time mark system and method
CN109166105A (en) * 2018-08-01 2019-01-08 中国人民解放军南京军区南京总医院 The malignancy of tumor risk stratification assistant diagnosis system of artificial intelligence medical image
CN109584218A (en) * 2018-11-15 2019-04-05 首都医科大学附属北京友谊医院 A kind of construction method of gastric cancer image recognition model and its application
CN111317430A (en) * 2020-02-28 2020-06-23 青岛大学附属医院 Automatic assessment device for digestive endoscopy inspection quality of children and adults based on artificial intelligence
CN111798425A (en) * 2020-06-30 2020-10-20 天津大学 Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱建伟 (Zhu Jianwei): "Application of computer-aided diagnosis technology in the diagnosis of gastrointestinal submucosal lesions", China Excellent Master's Theses Full-text Database (Master's), Medicine and Health Sciences *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113270171A (en) * 2021-06-18 2021-08-17 上海市第一人民医院 Pregnancy B-ultrasonic detection auxiliary method based on artificial intelligence
CN113539438A (en) * 2021-07-07 2021-10-22 众享健康医疗科技(浙江)有限公司 Artificial intelligence auxiliary system for screening fetal congenital heart disease
CN113610847A (en) * 2021-10-08 2021-11-05 武汉楚精灵医疗科技有限公司 Method and system for evaluating stomach markers in white light mode
CN114587416A (en) * 2022-03-10 2022-06-07 山东大学齐鲁医院 Gastrointestinal tract submucosal tumor diagnosis system based on deep learning multi-target detection

Similar Documents

Publication Publication Date Title
Du et al. Review on the applications of deep learning in the analysis of gastrointestinal endoscopy images
CN112801958A (en) Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium
CN109670510B (en) Deep learning-based gastroscope biopsy pathological data screening system
CN110600122B (en) Digestive tract image processing method and device and medical system
US20180263568A1 (en) Systems and Methods for Clinical Image Classification
WO2021093448A1 (en) Image processing method and apparatus, server, medical image processing device and storage medium
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
CN109544526B (en) Image recognition system, device and method for chronic atrophic gastritis
CN109523535B (en) Pretreatment method of lesion image
WO2015035229A2 (en) Apparatuses and methods for mobile imaging and analysis
Cho et al. Comparison of convolutional neural network models for determination of vocal fold normality in laryngoscopic images
CN109411084A (en) A kind of intestinal tuberculosis assistant diagnosis system and method based on deep learning
CN111144271B (en) Method and system for automatically identifying biopsy parts and biopsy quantity under endoscope
CN109460717A (en) Alimentary canal Laser scanning confocal microscope lesion image-recognizing method and device
US20230206435A1 (en) Artificial intelligence-based gastroscopy diagnosis supporting system and method for improving gastrointestinal disease detection rate
Dmitry et al. Review of features and metafeatures allowing recognition of abnormalities in the images of GIT
CN110974179A (en) Auxiliary diagnosis system for stomach precancer under electronic staining endoscope based on deep learning
Xu et al. Upper gastrointestinal anatomy detection with multi‐task convolutional neural networks
Wang et al. Localizing and identifying intestinal metaplasia based on deep learning in oesophagoscope
KR102095730B1 (en) Method for detecting lesion of large intestine disease based on deep learning
CN111754530A (en) Prostate ultrasonic image segmentation and classification method
Ham et al. Improvement of gastroscopy classification performance through image augmentation using a gradient-weighted class activation map
CN116797889B (en) Updating method and device of medical image recognition model and computer equipment
CN116563216B (en) Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition
Garcia-Peraza-Herrera et al. Interpretable fully convolutional classification of intrapapillary capillary loops for real-time detection of early squamous neoplasia

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210514

RJ01 Rejection of invention patent application after publication