CN114464289A - ERCP report generation method and device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN114464289A (application number CN202210127179.1A)
- Authority
- CN
- China
- Prior art keywords: video, white light, X-ray, module, cannulation
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G16H15/00 — ICT specially adapted for medical reports, e.g. generation or transmission thereof
- A61B5/0084 — Measuring for diagnostic purposes using light, adapted for introduction into the body, e.g. by catheters
- A61B5/425 — Evaluating particular parts, e.g. particular organs: pancreas
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B6/50 — Apparatus or devices for radiation diagnosis specially adapted for specific body parts or clinical applications
- A61B6/5211 — Devices using data or image processing for radiation diagnosis, involving processing of medical diagnostic data
- G06F18/24 — Pattern recognition: classification techniques
- G06N3/045 — Neural networks: combinations of networks
- G06N3/08 — Neural networks: learning methods
- G06T7/0012 — Image analysis: biomedical image inspection
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/20081 — Algorithmic details: training; learning
- G06T2207/20084 — Algorithmic details: artificial neural networks [ANN]
- G06T2207/30004 — Subject of image: biomedical image processing
Abstract
An embodiment of the present application provides an ERCP report generation method and apparatus, an electronic device, and a computer-readable storage medium. The method first acquires a medical video to be processed, which includes a white light video and an X-ray video; then calls a corresponding trained image and video recognition model according to the type of the medical video, the image and video recognition model comprising a white light recognition model or an X-ray recognition model; recognizes the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizes the X-ray video based on the trained X-ray recognition model to obtain a second recognition result; and finally generates an ERCP report according to the first recognition result and the second recognition result. The medical video to be processed is processed by neural network models and the report is generated automatically, so no manual film reading or report writing is needed, which improves the efficiency of endoscopic examination.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to an ERCP report generation method, apparatus, electronic device, and computer-readable storage medium.
Background
With the development of endoscopic technology, endoscopic retrograde cholangiopancreatography (ERCP) has become one of the major methods for diagnosing and treating biliopancreatic diseases. A standardized, accurate, and comprehensive ERCP report is vitally important, providing key information to clinicians, endoscopists, and patients.
In the traditional ERCP workflow, the report is usually written by a physician after the procedure, based on the medical images obtained during the operation. This manual review and film-reading process is time-consuming and inefficient, and the accuracy of the final ERCP report suffers both from the delay between the procedure and the writing of the report and from differences in the experience and working habits of individual endoscopists.
Summary
Embodiments of the present application provide an ERCP report generation method and apparatus, an electronic device, and a computer-readable storage medium, which ensure the accuracy of the ERCP report while improving the efficiency of endoscopic examination.
In one aspect, the present application provides an ERCP report generation method, including:
acquiring a medical video to be processed, wherein the medical video to be processed comprises a white light video and an X-ray video;
calling a corresponding trained image and video recognition model according to the type of the medical video to be processed, wherein the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
recognizing the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
and generating an ERCP report according to the first recognition result and the second recognition result.
Optionally, in some possible implementations of the present application, the white light recognition model includes a diverticulum papilla classification module, a papilla morphology classification module, a first hybrid module, and a second hybrid module, and the step of recognizing the white light video based on the trained white light recognition model to obtain a first recognition result includes:
the diverticulum papilla classification module recognizes white light images in the white light video to obtain a diverticulum papilla classification result; the papilla morphology classification module recognizes white light images in the white light video to obtain a papilla morphology classification result; the first hybrid module recognizes the white light video to obtain a first cannulation result corresponding to the white light video and the cannulation tool category in the white light video; and the second hybrid module recognizes white light images in the white light video to obtain the stone-extraction tool category and the stone recognition result in the white light video.
Optionally, in some possible implementations of the present application, the first hybrid module includes a pre-cut recognition module, a cannulation attempt count and time recognition module, and a cannulation tool recognition module, and the step of recognizing the white light video through the first hybrid module to obtain a first cannulation result corresponding to the white light video and the cannulation tool category in the white light video includes:
recognizing white light images in the white light video through the pre-cut recognition module to obtain a pre-cut recognition result;
recognizing the white light video through the cannulation attempt count and time recognition module to obtain a cannulation image and a first cannulation result;
and recognizing the cannulation image through the cannulation tool recognition module to obtain the cannulation tool category.
Optionally, in some possible implementations of the present application, the X-ray recognition model includes a pancreatic duct entry count recognition module, a third hybrid module, and a biliopancreatic duct stent recognition module, and the step of recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result includes:
the pancreatic duct entry count recognition module recognizes the X-ray video to obtain a corresponding second cannulation result; the third hybrid module segments and recognizes X-ray images in the X-ray video to obtain a bile duct site recognition result; and the biliopancreatic duct stent recognition module recognizes X-ray images in the X-ray video to obtain a biliopancreatic duct stent recognition result.
Optionally, in some possible implementations of the present application, the third hybrid module includes a bile duct and endoscope segmentation module and a bile duct site recognition module; the step of segmenting and recognizing the X-ray images in the X-ray video through the third hybrid module to obtain the bile duct site recognition result includes:
segmenting the X-ray images in the X-ray video through the bile duct and endoscope segmentation module to obtain a bile duct diameter segmentation result;
and recognizing the bile duct diameter segmentation result through the bile duct site recognition module to obtain a bile duct site recognition result.
Optionally, in some possible implementations of the present application, the step of recognizing the white light video through the cannulation attempt count and time recognition module to obtain a cannulation image and a first cannulation result includes:
recognizing the white light video based on the cannulation attempt count and time recognition module to obtain the number of cannulation attempts, the cannulation time, and cannulation images in the white light video;
and determining a first cannulation result according to the number of cannulation attempts and the cannulation time.
Optionally, in some possible implementations of the present application, the step of recognizing the white light video based on the cannulation attempt count and time recognition module to obtain the cannulation time in the white light video includes:
recognizing white light images in the white light video based on the cannulation attempt count and time recognition module to obtain a first time corresponding to the cannulation start image and a second time corresponding to the cannulation success image in the white light video;
and obtaining the cannulation time from the first time and the second time.
In one aspect, the present application provides an ERCP report generation apparatus, including:
an acquisition module, configured to acquire a medical video to be processed, where the medical video to be processed includes a white light video and an X-ray video;
a calling module, configured to call a corresponding trained image and video recognition model according to the type of the medical video to be processed, where the image and video recognition model includes a white light recognition model or an X-ray recognition model;
a recognition module, configured to recognize the white light video based on the trained white light recognition model to obtain a first recognition result, and to recognize the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
and a generating module, configured to generate an ERCP report according to the first recognition result and the second recognition result.
In one aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps in the ERCP report generation method when executing the program.
In one aspect, the present application provides a computer-readable storage medium, in which a computer program is stored; the computer program, when executed by a processor, implements the steps in the ERCP report generation method described above.
The embodiments of the present application provide an ERCP report generation method and apparatus, an electronic device, and a computer-readable storage medium. The method first acquires a medical video to be processed, which includes a white light video and an X-ray video; then calls a corresponding trained image and video recognition model according to the type of the medical video, the model comprising a white light recognition model or an X-ray recognition model; recognizes the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizes the X-ray video based on the trained X-ray recognition model to obtain a second recognition result; and finally generates an ERCP report according to the two results. The medical video to be processed is processed by neural network models and the report is generated automatically, so no manual film reading or report writing is needed, which avoids the loss of accuracy caused by the delay between the procedure and the writing of the report and by differences in the experience and working habits of individual endoscopists. The ERCP report generation method therefore ensures the accuracy of the ERCP report while improving the efficiency of endoscopic examination.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a scene of an ERCP report generation system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of an ERCP report generation method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of the white light recognition model according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of the X-ray recognition model according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an ERCP report provided in an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an ERCP report generation apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments obtained by those of ordinary skill in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, in the specification, claims, and drawings of this application are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements that are not listed or that are inherent to the process, method, article, or apparatus. Furthermore, the terms "first," "second," "third," etc. are used to distinguish different objects, not to describe a particular order.
The embodiments of the present application provide an ERCP report generation method and apparatus, an electronic device, and a computer-readable storage medium. The ERCP report generation apparatus may be integrated in an electronic device, which may be a server or a terminal; the terminal may be a tablet computer, a notebook computer, a personal computer (PC), a microprocessor device, or the like.
In the present application, the ERCP report is the examination report generated during endoscopic retrograde cholangiopancreatography. Its content includes whether deep cannulation of the bile duct was successfully achieved, the instruments used during cannulation (such as a sphincterotome or a balloon catheter), the techniques used during the operation, the specific tools used, the expected results, and the like.
In the present application, the image and video recognition models are neural network models trained on large numbers of samples.
In the present application, the video capture device is an imaging device that records the white light and X-ray video produced during ERCP.
Referring to fig. 1, fig. 1 is a schematic view of a scene of an ERCP report generation system according to an embodiment of the present application, illustrated with the ERCP report generation device integrated in a server 11. The system may include a database 13, the server 11, and a video capture device 12; the database 13 and the server 11, and the server 11 and the video capture device 12, exchange data through a wireless or wired network, where:
the database 13 may be a local database and/or a remote database, etc.
The server 11 includes, but is not limited to, a tablet computer, a notebook computer, a personal computer (PC), a micro-processing box, or other devices, and may be a local server and/or a remote server.
The server 11 acquires the medical video to be processed from the video capture device 12 or the database 13, where the medical video to be processed includes a white light video and an X-ray video; calls a corresponding trained image and video recognition model according to the type of the medical video, the model comprising a white light recognition model or an X-ray recognition model; recognizes the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizes the X-ray video based on the trained X-ray recognition model to obtain a second recognition result; and finally generates an ERCP report according to the two results. The medical video to be processed is processed by neural network models and the report is generated automatically, without manual film reading or report writing, which improves the efficiency of endoscopic examination.
It should be noted that the scene of the ERCP report generation system shown in fig. 1 is only an example. The database, server, and video capture device described in this embodiment are intended to illustrate the technical solution more clearly and do not limit it; as those skilled in the art will appreciate, the technical solution provided here applies equally to similar technical problems as systems evolve and new application scenarios emerge. The embodiments are described in detail below; the order of description does not imply a preferred order of the embodiments.
Referring to fig. 2, fig. 2 is a schematic flowchart of the ERCP report generation method provided in the embodiment of the present application, applied to the above server. The ERCP report generation method includes:
Step 201: acquire the medical video to be processed, where the medical video to be processed includes a white light video and an X-ray video.
The server acquires the medical video to be processed from the video capture device or the database; the video is generated during an endoscopic retrograde cholangiopancreatography examination and includes a white light video and an X-ray video.
Step 202: call a corresponding trained image and video recognition model according to the type of the medical video to be processed, where the image and video recognition model includes a white light recognition model or an X-ray recognition model.
Since the medical video to be processed includes both a white light video and an X-ray video, the white light recognition model is called to recognize the white light video and the X-ray recognition model is called to recognize the X-ray video.
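Although the embodiment does not prescribe an implementation, the dispatch in step 202 amounts to a lookup from video type to trained model. A minimal Python sketch is shown below; the `VideoType` enum, the registry, the file names, and the loader are illustrative assumptions, not details from the embodiment.

```python
# Minimal sketch of step 202: dispatch the video to the matching trained model.
# VideoType, MODEL_REGISTRY, and load_model are illustrative assumptions.
from enum import Enum

class VideoType(Enum):
    WHITE_LIGHT = "white_light"
    XRAY = "x_ray"

def load_model(path: str):
    # Placeholder loader; a real system might deserialize a network here.
    return f"model<{path}>"

MODEL_REGISTRY = {
    VideoType.WHITE_LIGHT: "white_light_recognition.pt",  # assumed file name
    VideoType.XRAY: "x_ray_recognition.pt",               # assumed file name
}

def dispatch_model(video_type: VideoType):
    """Return the trained recognition model matching the video type."""
    return load_model(MODEL_REGISTRY[video_type])

print(dispatch_model(VideoType.WHITE_LIGHT))  # model<white_light_recognition.pt>
```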
Step 203: recognize the white light video based on the trained white light recognition model to obtain a first recognition result, and recognize the X-ray video based on the trained X-ray recognition model to obtain a second recognition result.
In this embodiment, the white light recognition model can be trained as a deep neural network using a large amount of white light image and video data as training samples, together with the corresponding annotated white light images and videos. The training samples are fed into the white light recognition model to obtain predicted recognition results, and the model parameters are trained according to the predicted results, the annotations, the loss function, and the loss target, yielding the final white light recognition model.
Likewise, the X-ray recognition model can be trained as a deep neural network using a large amount of X-ray image and video data as training samples, together with the corresponding annotated X-ray images and videos. The training samples are fed into the X-ray recognition model to obtain predicted recognition results, and the model parameters are trained according to the predicted results, the annotations, the loss function, and the loss target, yielding the final X-ray recognition model.
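Both training procedures follow the standard supervised pattern: annotated samples in, predicted results out, parameters updated against a loss. A minimal PyTorch sketch of that loop follows; the tiny network, the random stand-in data, and the hyperparameters are assumptions, since the embodiment specifies neither an architecture nor a particular loss function.

```python
# Generic supervised training loop of the kind described above (a sketch).
import torch
import torch.nn as nn

model = nn.Sequential(                        # stand-in for a recognition model
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),                          # e.g. three output classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()             # assumed loss function

frames = torch.randn(16, 3, 64, 64)           # stand-in for annotated frames
labels = torch.randint(0, 3, (16,))           # stand-in for expert annotations

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(frames)                    # predicted recognition results
    loss = criterion(logits, labels)          # compare against the annotations
    loss.backward()                           # parameter training step
    optimizer.step()
```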
In one embodiment, the white light recognition model includes a diverticulum papilla classification module, a papilla morphology classification module, a first hybrid module, and a second hybrid module, and the step of recognizing the white light video based on the trained white light recognition model to obtain a first recognition result includes: the diverticulum papilla classification module recognizes white light images in the white light video to obtain a diverticulum papilla classification result; the papilla morphology classification module recognizes white light images to obtain a papilla morphology classification result; the first hybrid module recognizes the white light video to obtain a first cannulation result corresponding to the white light video and the cannulation tool category in the white light video; and the second hybrid module recognizes white light images to obtain the stone-extraction tool category and the stone recognition result in the white light video.
Images of the duodenal papilla are selected from the white light video and classified by a specialist physician into papillae without a diverticulum, papillae beside a diverticulum, and papillae within a diverticulum. The classified images are used as training samples for parameter training of the diverticulum papilla classification module, finally yielding the trained diverticulum papilla classification module.
Images of the duodenal papilla are likewise selected from the white light video and classified by a specialist physician into normal papillae and abnormal papillae. The classified images are used as training samples for parameter training of the papilla morphology classification module, finally yielding the trained papilla morphology classification module.
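Both papilla modules are frame-level classifiers of this kind. The sketch below shows how such a trained classifier might be run over sampled white light frames, averaging per-frame probabilities into one video-level label; the placeholder network, the class names, and the aggregation rule are illustrative assumptions.

```python
# Sketch: applying a trained frame classifier to sampled white light frames.
import torch
import torch.nn as nn

CLASSES = ["no diverticulum", "papilla beside diverticulum",
           "papilla within diverticulum"]   # follows the description above

classifier = nn.Sequential(                 # placeholder for the trained module
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, len(CLASSES)),
)

@torch.no_grad()
def classify_video(frames: torch.Tensor) -> str:
    # frames: (n, 3, H, W) tensor of frames sampled from the white light video
    probs = classifier(frames).softmax(dim=1).mean(dim=0)  # average over frames
    return CLASSES[int(probs.argmax())]

print(classify_video(torch.randn(4, 3, 64, 64)))
```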
In one embodiment, the first hybrid module includes a pre-cut recognition module, a cannulation attempt count and time recognition module, and a cannulation tool recognition module, and the step of recognizing the white light video through the first hybrid module to obtain a first cannulation result corresponding to the white light video and the cannulation tool category includes: recognizing white light images through the pre-cut recognition module to obtain a pre-cut recognition result; recognizing the white light video through the cannulation attempt count and time recognition module to obtain a cannulation image and a first cannulation result; and recognizing the cannulation image through the cannulation tool recognition module to obtain the cannulation tool category.
A specialist physician selects pre-cut images, and the annotated pre-cut images are used for parameter training of the pre-cut recognition module, finally yielding the trained pre-cut recognition module. White light cannulation videos are selected as training samples; the cannulation start image and cannulation success image of each video are annotated, along with the number of cannulation attempts in each sample video, and an LSTM (Long Short-Term Memory) network is trained on these samples to obtain the cannulation attempt count and time recognition module. Images containing the tools used for cannulation, such as a sphincterotome or a double guidewire, are selected and annotated by the physician, and the annotated images are used as training samples for the cannulation tool recognition module, finally yielding the trained cannulation tool recognition module.
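For the cannulation attempt count and time recognition module, the embodiment names an LSTM trained on annotated cannulation videos. Below is a minimal sketch of such a sequence model, assuming per-frame CNN embeddings as input and treating the attempt count as a classification over a bounded range; the feature dimension, hidden size, and upper bound are assumptions.

```python
# Sketch of an LSTM-based cannulation attempt counter over frame embeddings.
import torch
import torch.nn as nn

class AttemptCounter(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, max_attempts=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, max_attempts + 1)  # counts 0..max_attempts

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, time, feat_dim) per-frame CNN embeddings
        _, (h_n, _) = self.lstm(frame_features)
        return self.head(h_n[-1])            # logits over possible attempt counts

counter = AttemptCounter()
clips = torch.randn(2, 300, 128)             # two clips of 300 frame embeddings
print(counter(clips).shape)                  # torch.Size([2, 11])
```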
In one embodiment, the second hybrid module includes a stone-extraction tool recognition module and a stone recognition module, and the step of recognizing the white light images in the white light video through the second hybrid module to obtain the stone-extraction tool category and the stone recognition result includes: recognizing the white light images through the stone-extraction tool recognition module to obtain a stone-extraction tool image; and recognizing the white light images through the stone recognition module to obtain a stone image. Fig. 3 is a schematic structural diagram of the white light recognition model according to an embodiment of the present application.
Images containing stone-extraction tools, including the sphincterotome, basket, and balloon, are selected by a specialist physician as training samples for the stone-extraction tool recognition model. The annotated samples are used for parameter training of the stone-extraction tool recognition module, finally yielding the trained module. Pictures containing extracted stones are selected by the physician as samples for the stone recognition model, and the stone recognition module is trained on the annotated samples to obtain the trained stone recognition module.
In one embodiment, the X-ray recognition model includes a pancreatic duct entry count recognition module, a third hybrid module, and a biliopancreatic duct stent recognition module, and the step of recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result includes: the pancreatic duct entry count recognition module recognizes the X-ray video to obtain a corresponding second cannulation result; the third hybrid module segments and recognizes X-ray images in the X-ray video to obtain a bile duct site recognition result; and the biliopancreatic duct stent recognition module recognizes X-ray images to obtain a biliopancreatic duct stent recognition result.
X-ray cannulation videos are selected as training samples by a specialist physician, the number of pancreatic duct entries in each sample video is annotated, and an LSTM (Long Short-Term Memory) network is trained on these samples to obtain the trained pancreatic duct entry count recognition module. X-ray images are classified by the physician into three types, bile duct stent placed, pancreatic duct stent placed, and no stent placed, and annotated; the annotated images are used as training samples for parameter training of the biliopancreatic duct stent recognition module.
In one embodiment, the third hybrid module includes a bile duct and endoscope segmentation module and a bile duct site recognition module; the step of segmenting and recognizing the X-ray images in the X-ray video through the third hybrid module to obtain the bile duct site recognition result includes: segmenting the X-ray images through the bile duct and endoscope segmentation module to obtain a bile duct diameter segmentation result; and recognizing the bile duct diameter segmentation result through the bile duct site recognition module to obtain a bile duct site recognition result. Fig. 4 is a schematic structural diagram of the X-ray recognition model according to an embodiment of the present application.
A specialist physician annotates bile duct images, outlining the bile duct and the endoscope in each image, and the annotated images serve as training samples for parameter training of the bile duct and endoscope segmentation module. The trained module can automatically calculate the diameter of each part of the bile duct from the known diameter of the endoscope and classify each part as normal, strictured, or dilated according to its diameter. The trained bile duct site recognition module then derives the bile duct site recognition result from the segmentation result: the intrahepatic bile ducts lie above the confluence of the left and right hepatic ducts, and the extrahepatic bile duct lies below it. The extrahepatic bile duct is divided into an upper, a middle, and a lower segment: the upper segment lies above the junction of the cystic duct and the common hepatic duct, the middle segment runs from that junction to the junction of the common bile duct and the pancreatic duct, and the lower segment lies below the latter junction.
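Using the endoscope of known diameter as an in-image scale is what turns pixel measurements into millimetres. The sketch below shows one way to estimate a duct diameter from the two segmentation masks and classify it; the row-wise width estimate, the 11.3 mm scope diameter, and the normal-range thresholds are all assumptions for illustration, not values from the embodiment.

```python
# Sketch: bile duct diameter from segmentation masks, scaled by the endoscope.
import numpy as np

def duct_diameter_mm(duct_mask, scope_mask, scope_diameter_mm=11.3):
    """Estimate duct width using the endoscope as an in-image scale.
    Masks are 2D boolean arrays from the segmentation module; the scope
    diameter is a per-device calibration value, assumed here."""
    # Median count of mask pixels per nonempty row approximates the width,
    # assuming both structures run roughly vertically in the image.
    scope_px = np.median([row.sum() for row in scope_mask if row.any()])
    duct_px = np.median([row.sum() for row in duct_mask if row.any()])
    return duct_px * scope_diameter_mm / scope_px

def classify_duct(diameter_mm, normal_range=(4.0, 8.0)):
    # Illustrative thresholds only; clinical cut-offs vary by site and patient.
    if diameter_mm < normal_range[0]:
        return "strictured"
    if diameter_mm > normal_range[1]:
        return "dilated"
    return "normal"

duct = np.zeros((100, 100), bool); duct[:, 40:52] = True    # 12 px wide duct
scope = np.zeros((100, 100), bool); scope[:, 70:90] = True  # 20 px wide scope
d = duct_diameter_mm(duct, scope)            # 12 * 11.3 / 20 ≈ 6.8 mm
print(round(d, 1), classify_duct(d))         # 6.8 normal
```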
In one embodiment, the step of recognizing the white light video through the cannulation attempt count and time recognition module to obtain a cannulation image and a first cannulation result includes: recognizing the white light video based on the cannulation attempt count and time recognition module to obtain the number of cannulation attempts, the cannulation time, and cannulation images in the white light video; and determining a first cannulation result according to the number of cannulation attempts and the cannulation time.
In one embodiment, the step of recognizing the white light video based on the cannulation attempt count and time recognition module to obtain the cannulation time includes: recognizing white light images in the white light video to obtain a first time corresponding to the cannulation start image and a second time corresponding to the cannulation success image; and obtaining the cannulation time from the first time and the second time.
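Given the first and second times, the cannulation time is simply their difference. A trivial sketch, assuming the two times are frame indices at a known frame rate:

```python
# Sketch: cannulation time from the start and success frames (fps assumed).
from datetime import timedelta

def cannulation_time(start_frame: int, success_frame: int, fps: float = 25.0):
    seconds = (success_frame - start_frame) / fps
    return timedelta(seconds=seconds)

print(cannulation_time(1500, 9300))  # 0:05:12
```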
Step 204: generate an ERCP report according to the first recognition result and the second recognition result.
The first recognition result and the second recognition result are converted into the corresponding text, the corresponding pictures are retained, and the report is generated automatically. Fig. 5 is a schematic diagram of an ERCP report provided in the embodiment of the present application; the report contains images corresponding to the key steps of the ERCP procedure together with descriptions of those images.
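Step 204 is essentially templating: each recognition result is rendered as text and paired with its key frame. A minimal sketch, with field names and phrasing invented for illustration:

```python
# Sketch of step 204: map recognition results to report text plus key frames.
def build_ercp_report(first_result: dict, second_result: dict) -> str:
    lines = ["ERCP Examination Report", ""]
    for name, finding in {**first_result, **second_result}.items():
        image = finding.get("frame", "no key frame")
        lines.append(f"- {name}: {finding['text']} [image: {image}]")
    return "\n".join(lines)

report = build_ercp_report(
    {"papilla morphology": {"text": "normal papilla", "frame": "wl_00214.png"}},
    {"bile duct site": {"text": "dilated middle segment", "frame": "xr_00087.png"}},
)
print(report)
```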
The embodiment of the present application provides an ERCP report generation method in which the medical video to be processed is processed by neural network models and the report is generated automatically, without manual film reading or report writing. This avoids the loss of accuracy caused by the delay between the procedure and the writing of the report and by differences in the experience and working habits of individual endoscopists. The ERCP report generation method therefore ensures the accuracy of the ERCP report while improving the efficiency of endoscopic examination.
Building on the above method, this embodiment is further described from the perspective of an ERCP report generation apparatus. Referring to fig. 6, fig. 6 shows a schematic structural diagram of the ERCP report generation apparatus provided in this embodiment, which may include:
an obtaining module 601, configured to obtain a medical video to be processed, where the medical video to be processed includes a white light video and an X-ray video;
a calling module 602, configured to call a corresponding trained image and video recognition model according to the type of the medical video to be processed, where the image and video recognition model includes a white light recognition model or an X-ray recognition model;
the recognition module 603 is configured to recognize the white light video based on the trained white light recognition model to obtain a first recognition result, and recognize the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
a generating module 604, configured to generate an ERCP report according to the first recognition result and the second recognition result.
Accordingly, embodiments of the present application also provide an electronic device. As shown in fig. 7, it may include a radio frequency circuit 701, a memory 702 including one or more computer-readable storage media, an input unit 703, a display unit 704, a sensor 705, an audio circuit 706, a WiFi module 707, a processor 708 including one or more processing cores, and a power supply 709. Those skilled in the art will appreciate that the configuration shown in fig. 7 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
the rf circuit 701 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then sends the received downlink information to the one or more processors 708 for processing; in addition, data relating to uplink is transmitted to the base station. The memory 702 may be used to store software programs and modules, and the processor 708 executes various functional applications and data processing by operating the software programs and modules stored in the memory 702. The input unit 703 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The display unit 704 may be used to display information input by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof.
The electronic device may also include at least one sensor 705, such as a light sensor, motion sensor, and other sensors. The audio circuitry 706 includes speakers that can provide an audio interface between the user and the electronic device.
WiFi is a short-range wireless transmission technology; through the WiFi module 707, the electronic device can help the user send and receive e-mail, browse web pages, and access streaming media, providing wireless broadband Internet access. Although fig. 7 shows the WiFi module 707, it is not an essential part of the electronic device and may be omitted as needed without changing the essence of the application.
The processor 708 is the control center of the electronic device: it connects the various parts of the device using various interfaces and lines, and performs the device's functions and processes data by running or executing the software programs and/or modules stored in the memory 702 and calling the data stored in the memory 702, thereby monitoring the device as a whole.
The electronic device also includes a power supply 709 (e.g., a battery) for supplying power to the components. Preferably, the power supply is logically connected to the processor 708 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the electronic device may further include a camera, a Bluetooth module, and the like, which are not described in detail here. Specifically, in this embodiment, the processor 708 loads the executable file corresponding to the process of one or more application programs into the memory 702 according to the following instructions, and runs the application programs stored in the memory 702, so as to implement the following functions:
acquiring a medical video to be processed, wherein the medical video to be processed comprises a white light video and an X-ray video;
calling a corresponding trained image and video recognition model according to the type of the medical video to be processed, wherein the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
recognizing the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
and generating an ERCP report according to the first recognition result and the second recognition result.
In the above embodiments, the descriptions of the respective embodiments have their respective emphases; for parts not described in detail in one embodiment, reference may be made to the detailed description above, which is not repeated here.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the following functions:
acquiring a medical video to be processed, wherein the medical video to be processed comprises a white light video and an X-ray video;
calling a corresponding trained image and video recognition model according to the type of the medical video to be processed, wherein the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
recognizing the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
and generating an ERCP report according to the first recognition result and the second recognition result.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps of any ERCP report generation method provided in the embodiments of the present application, they can achieve the beneficial effects of any such method, as detailed in the foregoing embodiments and not repeated here.
The ERCP report generation method and apparatus, electronic device, and computer-readable storage medium provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help readers understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, and some technical features may be replaced by equivalents; such modifications and substitutions do not depart from the scope of the present disclosure as defined by the appended claims.
Claims (10)
1. An ERCP report generation method, comprising:
acquiring a medical video to be processed, wherein the medical video to be processed comprises a white light video and an X-ray video;
calling a corresponding trained image and video recognition model according to the type of the medical video to be processed, wherein the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
recognizing the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
and generating an ERCP report according to the first recognition result and the second recognition result.
2. The ERCP report generation method of claim 1, wherein the white light recognition model comprises a diverticulum papilla classification module, a papilla morphology classification module, a first hybrid module, and a second hybrid module, and the step of recognizing the white light video based on the trained white light recognition model to obtain a first recognition result comprises:
the diverticulum papilla classification module recognizes white light images in the white light video to obtain a diverticulum papilla classification result; the papilla morphology classification module recognizes white light images in the white light video to obtain a papilla morphology classification result; the first hybrid module recognizes the white light video to obtain a first cannulation result corresponding to the white light video and the cannulation tool category in the white light video; and the second hybrid module recognizes white light images in the white light video to obtain the stone-extraction tool category and the stone recognition result in the white light video.
3. The ERCP report generation method of claim 2, wherein the first hybrid module comprises a pre-cut recognition module, a cannulation attempt count and time recognition module, and a cannulation tool recognition module, and the step of recognizing the white light video through the first hybrid module to obtain a first cannulation result corresponding to the white light video and the cannulation tool category in the white light video comprises:
recognizing white light images in the white light video through the pre-cut recognition module to obtain a pre-cut recognition result;
recognizing the white light video through the cannulation attempt count and time recognition module to obtain a cannulation image and a first cannulation result;
and recognizing the cannulation image through the cannulation tool recognition module to obtain the cannulation tool category.
4. The ERCP report generation method of claim 1, wherein the X-ray recognition model comprises a pancreatic duct entry count recognition module, a third hybrid module, and a biliopancreatic duct stent recognition module, and the step of recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result comprises:
the pancreatic duct entry count recognition module recognizes the X-ray video to obtain a corresponding second cannulation result; the third hybrid module segments and recognizes X-ray images in the X-ray video to obtain a bile duct site recognition result; and the biliopancreatic duct stent recognition module recognizes X-ray images in the X-ray video to obtain a biliopancreatic duct stent recognition result.
5. The ERCP report generation method of claim 4, wherein the third hybrid module comprises a bile duct and endoscope segmentation module and a bile duct site recognition module; the step of segmenting and recognizing the X-ray images in the X-ray video through the third hybrid module to obtain the bile duct site recognition result comprises:
segmenting the X-ray images in the X-ray video through the bile duct and endoscope segmentation module to obtain a bile duct diameter segmentation result;
and recognizing the bile duct diameter segmentation result through the bile duct site recognition module to obtain a bile duct site recognition result.
6. The ERCP report generation method of claim 3, wherein the step of recognizing the white light video through the cannulation attempt count and time recognition module to obtain a cannulation image and a first cannulation result comprises:
recognizing the white light video based on the cannulation attempt count and time recognition module to obtain the number of cannulation attempts, the cannulation time, and cannulation images in the white light video;
and determining a first cannulation result according to the number of cannulation attempts and the cannulation time.
7. The ERCP report generation method of claim 3, wherein the step of recognizing the white light video based on the cannulation attempt count and time recognition module to obtain the cannulation time in the white light video comprises:
recognizing white light images in the white light video based on the cannulation attempt count and time recognition module to obtain a first time corresponding to the cannulation start image and a second time corresponding to the cannulation success image in the white light video;
and obtaining the cannulation time from the first time and the second time.
8. An ERCP report generation apparatus, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring medical videos to be processed, and the medical videos to be processed comprise white light videos and X-ray videos;
the calling module is used for calling a corresponding trained image and video recognition model according to the type of the medical video to be processed, and the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
the recognition module is used for recognizing the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
and the generating module is used for generating an ERCP report according to the first recognition result and the second recognition result.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps in the ERCP report generation method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the ERCP report generation method according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210127179.1A CN114464289B (en) | 2022-02-11 | 2022-02-11 | ERCP report generation method, ERCP report generation device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114464289A true CN114464289A (en) | 2022-05-10 |
CN114464289B CN114464289B (en) | 2024-06-18 |
Family
ID=81413515
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210127179.1A Active CN114464289B (en) | 2022-02-11 | 2022-02-11 | ERCP report generation method, ERCP report generation device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114464289B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010009735A2 (en) * | 2008-07-23 | 2010-01-28 | Dako Denmark A/S | Combinatorial analysis and repair |
CN112652393A (en) * | 2020-12-31 | 2021-04-13 | 山东大学齐鲁医院 | ERCP quality control method, system, storage medium and equipment based on deep learning |
WO2021189952A1 (en) * | 2020-10-21 | 2021-09-30 | 平安科技(深圳)有限公司 | Model training method and apparatus, action recognition method and apparatus, and device and storage medium |
CN113642537A (en) * | 2021-10-14 | 2021-11-12 | 武汉大学 | Medical image recognition method and device, computer equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
An Wei; Shi Xingang: "Improving the teaching level and standardized training of ERCP endoscopy", Modern Medicine and Health, no. 05, 13 March 2020 (2020-03-13) *
Li Peng; Wang Yongjun; Wang Wenhai: "Guidelines for the diagnosis and treatment of ERCP (2018 edition)", Chinese Journal of Practical Internal Medicine, no. 11, 1 November 2018 (2018-11-01) *
Also Published As
Publication number | Publication date |
---|---|
CN114464289B (en) | 2024-06-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |