CN114464289B - ERCP report generation method, ERCP report generation device, electronic equipment and computer readable storage medium - Google Patents
- Publication number: CN114464289B (application CN202210127179.1A)
- Authority: CN (China)
- Prior art keywords: white light; video; X-ray; identification
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G16H15/00 — ICT specially adapted for medical reports, e.g. generation or transmission thereof
- A61B5/0084 — Diagnostic measurement using light, adapted for introduction into the body, e.g. by catheters
- A61B5/425 — Evaluating particular parts, e.g. particular organs: pancreas
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B6/50 — Radiation diagnosis apparatus specially adapted for specific body parts or specific clinical applications
- A61B6/5211 — Data or image processing for radiation diagnosis involving processing of medical diagnostic data
- G06F18/24 — Pattern recognition; classification techniques
- G06N3/045 — Neural network architectures; combinations of networks
- G06N3/08 — Neural network learning methods
- G06T7/0012 — Biomedical image inspection
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30004 — Biomedical image processing
Abstract
The embodiments of the present application provide an ERCP report generation method, an ERCP report generation device, an electronic device, and a computer-readable storage medium. The medical video to be processed is analyzed by neural network models and the report is generated automatically, so manual film reading and report writing are not needed and the efficiency of endoscopic examination is improved.
Description
Technical Field
The present application relates to the field of image processing, and in particular to an ERCP report generation method, an ERCP report generation device, an electronic device, and a computer-readable storage medium.
Background
With the development of endoscopic technology, endoscopic retrograde cholangiopancreatography (ERCP) has become one of the main methods for diagnosing and treating biliary and pancreatic diseases. A standardized, accurate, and comprehensive ERCP report is critical, as it provides important information to clinicians, endoscopists, and patients.
In the traditional ERCP-based diagnosis and treatment process, the report is usually written by the physician after the procedure is finished, based on the medical images obtained during the operation. This manual reading and review consumes considerable time and is inefficient, and the delay between the procedure and report writing, together with differences in the working experience and habits of individual endoscopists, lowers the accuracy of the final ERCP report.
Summary of the application
The embodiments of the present application provide an ERCP report generation method, an ERCP report generation device, an electronic device, and a computer-readable storage medium, which can ensure the accuracy of the ERCP report and improve the efficiency of endoscopic examination.
In one aspect, the present application provides a method for generating an ERCP report, including:
acquiring a medical video to be processed, wherein the medical video to be processed includes white light video and X-ray video;
invoking a corresponding trained image and video recognition model according to the type of the medical video to be processed, wherein the image and video recognition models include a white light recognition model or an X-ray recognition model;
identifying the white light video based on the trained white light recognition model to obtain a first recognition result, and identifying the X-ray video based on the trained X-ray recognition model to obtain a second recognition result; and
generating an ERCP report according to the first recognition result and the second recognition result.
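The four steps above amount to a dispatch pipeline: route each video to the model matching its type, then merge the two recognition results into one report. The sketch below is illustrative only — the model functions are hypothetical stand-ins for the trained neural networks, and the result fields are invented for the example.

```python
# Hypothetical stand-ins for the trained recognition models; in the patent
# these are neural networks operating on the video frames.
def white_light_model(video):
    # First recognition result: papilla classes, cannulation outcome, tools.
    return {"diverticulum_papilla": "non-diverticular", "first_cannulation": "success"}

def xray_model(video):
    # Second recognition result: bile duct part, biliopancreatic stent status.
    return {"bile_duct_part": "common bile duct", "stent": "none"}

# Dispatch table keyed by the type of the medical video to be processed.
MODELS = {"white_light": white_light_model, "x_ray": xray_model}

def generate_ercp_report(videos):
    """Call the recognition model matching each video's type and merge the
    first and second recognition results into one report."""
    report = {}
    for video_type, video in videos.items():
        report.update(MODELS[video_type](video))
    return report

report = generate_ercp_report({"white_light": b"...", "x_ray": b"..."})
```

In a real system the merged dictionary would be rendered into the structured report shown later in fig. 5.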
Optionally, in some possible implementations of the present application, the white light recognition model includes a diverticulum papilla classification module, a papilla morphology classification module, a first mixing module, and a second mixing module, and the step of identifying the white light video based on the trained white light recognition model to obtain a first recognition result includes:
identifying white light images in the white light video through the diverticulum papilla classification module to obtain a diverticulum papilla classification result; identifying white light images in the white light video through the papilla morphology classification module to obtain a papilla morphology classification result; identifying the white light video through the first mixing module to obtain a first cannulation result corresponding to the white light video and the cannulation tool categories in the white light video; and identifying white light images in the white light video through the second mixing module to obtain the stone extraction tool categories and a stone identification result in the white light video.
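The four white light sub-modules each contribute named fields to the first recognition result. The sketch below shows how their outputs might be assembled; the module internals are hypothetical stand-ins, and only the four outputs named in the step above come from the patent.

```python
def first_recognition_result(white_light_video):
    # Hard-coded stand-ins for the four sub-module outputs; a real system
    # would run four trained sub-networks over the video frames.
    diverticulum_cls = "papilla beside a diverticulum"      # diverticulum papilla module
    morphology_cls = "normal papilla and opening"           # papilla morphology module
    cannulation, cannulation_tools = "success", ["sphincterotome"]  # first mixing module
    stone_tools, stones = ["basket"], "stones present"      # second mixing module
    return {
        "diverticulum_papilla": diverticulum_cls,
        "papilla_morphology": morphology_cls,
        "first_cannulation_result": cannulation,
        "cannulation_tools": cannulation_tools,
        "stone_extraction_tools": stone_tools,
        "stone_result": stones,
    }

result = first_recognition_result(b"...")
```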
Optionally, in some possible implementations of the present application, the first mixing module includes a pre-cut identification module, a cannulation attempt number and time identification module, and a cannulation tool identification module, and the step of identifying the white light video through the first mixing module to obtain the first cannulation result corresponding to the white light video and the cannulation tool categories in the white light video includes:
identifying white light images in the white light video through the pre-cut identification module to obtain a pre-cut identification result;
identifying the white light video through the cannulation attempt number and time identification module to obtain cannulation images and the first cannulation result; and
identifying the cannulation images through the cannulation tool identification module to obtain the cannulation tool categories.
Optionally, in some possible implementations of the present application, the X-ray recognition model includes a pancreatic duct visualization count identification module, a third mixing module, and a biliopancreatic duct stent identification module, and the step of identifying the X-ray video based on the trained X-ray recognition model to obtain a second recognition result includes:
identifying the X-ray video through the pancreatic duct visualization count identification module to obtain the corresponding second cannulation result in the X-ray video; segmenting and identifying X-ray images in the X-ray video through the third mixing module to obtain a bile duct part identification result in the X-ray video; and identifying X-ray images in the X-ray video through the biliopancreatic duct stent identification module to obtain a biliopancreatic duct stent identification result in the X-ray video.
Optionally, in some possible implementations of the present application, the third mixing module includes a bile duct and endoscope segmentation module and a bile duct part identification module; the step of segmenting and identifying the X-ray images in the X-ray video through the third mixing module to obtain the bile duct part identification result in the X-ray video includes:
segmenting X-ray images in the X-ray video through the bile duct and endoscope segmentation module to obtain a bile duct diameter segmentation result; and
identifying the bile duct diameter segmentation result through the bile duct part identification module to obtain the bile duct part identification result.
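One plausible reason for segmenting the endoscope alongside the bile duct is that the scope's known physical diameter can serve as a scale reference for converting the duct's pixel width into millimetres. This is an assumed design, not stated in the patent; the scope diameter and the dilation threshold below are hypothetical values.

```python
def bile_duct_diameter_mm(duct_px, scope_px, scope_mm=11.3):
    # Convert the segmented duct width from pixels to millimetres using the
    # endoscope's (assumed) known insertion-tube diameter as the scale.
    return duct_px * scope_mm / scope_px

def classify_bile_duct(diameter_mm, dilation_threshold_mm=8.0):
    # Hypothetical rule for the part-identification stage: extrahepatic ducts
    # wider than ~8 mm are commonly reported as dilated.
    return "dilated" if diameter_mm > dilation_threshold_mm else "normal"

d = bile_duct_diameter_mm(duct_px=30, scope_px=60)  # half the scope width
```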
Optionally, in some possible implementations of the present application, the step of identifying the white light video through the cannulation attempt number and time identification module to obtain the cannulation images and the first cannulation result includes:
identifying the white light video based on the cannulation attempt number and time identification module to obtain the number of cannulation attempts, the cannulation time, and the cannulation images in the white light video; and
determining the first cannulation result according to the number of cannulation attempts and the cannulation time.
Optionally, in some possible implementations of the present application, the step of identifying the white light video based on the cannulation attempt number and time identification module to obtain the cannulation time in the white light video includes:
identifying white light images in the white light video based on the cannulation attempt number and time identification module to obtain a first time corresponding to the cannulation start image and a second time corresponding to the cannulation success image in the white light video; and
obtaining the cannulation time from the first time and the second time.
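A minimal sketch of these two steps: the cannulation time is the difference between the success-frame and start-frame timestamps, and the first cannulation result is derived from the attempt count and duration. The "difficult cannulation" thresholds below (5 attempts, 5 minutes) follow common ERCP practice but are assumptions, not values from the patent.

```python
def cannulation_time(start_s, success_s):
    # Second time (cannulation success frame) minus first time (cannulation
    # start frame), both expressed in seconds of video time.
    return success_s - start_s

def first_cannulation_result(attempts, duration_s,
                             max_attempts=5, max_seconds=300):
    # Hypothetical criterion combining attempt count and duration.
    if attempts > max_attempts or duration_s > max_seconds:
        return "difficult cannulation"
    return "standard cannulation"

t = cannulation_time(start_s=95.0, success_s=215.0)  # 120 seconds
```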
In one aspect, the present application provides an ERCP report generating apparatus, comprising:
the acquisition module is used for acquiring medical videos to be processed, wherein the medical videos to be processed comprise white light videos and X-ray videos;
The calling module is used for calling a corresponding trained image and video recognition model according to the type of the medical video to be processed, wherein the image and video recognition models include a white light recognition model or an X-ray recognition model;
the recognition module is used for recognizing the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
And the generating module is used for generating an ERCP report according to the first identification result and the second identification result.
In one aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the steps in the ERCP report generating method described above are implemented when the processor executes the program.
In one aspect, the present application provides a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the steps of the ERCP report generation method described above.
The embodiments of the present application provide an ERCP report generation method, an ERCP report generation device, an electronic device, and a computer-readable storage medium. The medical video to be processed is analyzed by neural network models and the report is generated automatically, so manual film reading and report writing are not needed, and the loss of accuracy caused by the delay between the procedure and report writing and by differences in the experience and habits of individual endoscopists is avoided. The method therefore ensures the accuracy of the ERCP report while improving the efficiency of endoscopic examination.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an ERCP report generating system according to an embodiment of the present application.
Fig. 2 is a flowchart of an ERCP report generating method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a white light identification module according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an X-ray identification module according to an embodiment of the present application.
Fig. 5 is an ERCP report schematic diagram provided in an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an ERCP report generating apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The terms "comprising" and "having" and any variations thereof, as used in the description, claims and drawings, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used for distinguishing between different objects and not for describing a particular sequential order.
The embodiments of the present application provide an ERCP report generation method, an ERCP report generation device, an electronic device, and a computer-readable storage medium. The ERCP report generating device may be integrated in an electronic device, which may be a server or a terminal, where the terminal may include a tablet computer, a notebook computer, a personal computer (PC), a mini processing box, or other devices.
In the present application, the ERCP report is the examination report generated during endoscopic retrograde cholangiopancreatography. Its content includes whether deep biliary cannulation was successfully achieved, the instruments used for cannulation (such as a sphincterotome or balloon catheter), the techniques used during the procedure, the specific tools used, the expected results, and so on.
In the present application, the image and video recognition models are neural network models that are trained on a large number of samples.
In the present application, the video acquisition device is any video capture device that can be used during ERCP.
Referring to fig. 1, fig. 1 is a schematic view of the scene of an ERCP report generating system according to an embodiment of the present application, illustrated with the ERCP report generating device integrated in the server 11. The system may include a database 13, a server 11, and a video acquisition device 12, where the database 13 and the server 11, and the server 11 and the video acquisition device 12, exchange data through a wireless or wired network, where:
database 13 may be a local database and/or a remote database, etc.
The server 11 includes, but is not limited to, a tablet computer, a notebook computer, a personal computer (PC), a mini processing box, or other device, and may be a local server and/or a remote server.
The server 11 obtains the medical video to be processed from the video acquisition device 12 or the database 13, where the medical video to be processed includes white light video and X-ray video; invokes the corresponding trained image and video recognition model according to the type of the medical video, where the image and video recognition models include a white light recognition model or an X-ray recognition model; identifies the white light video based on the trained white light recognition model to obtain a first recognition result, and identifies the X-ray video based on the trained X-ray recognition model to obtain a second recognition result; and finally generates an ERCP report according to the first and second recognition results. Because the medical video to be processed is analyzed by neural network models and the report is generated automatically, manual film reading and report writing are not needed, and the efficiency of endoscopic examination is improved.
It should be noted that the schematic view of the ERCP report generating system shown in fig. 1 is only an example; the database, server and video acquisition device described in the embodiments of the present application are intended to describe the technical solutions of the embodiments more clearly and do not limit them. As those skilled in the art will appreciate, with the evolution of the system and the emergence of new service scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems. The detailed description follows; the order of the embodiments below is not intended to limit which embodiments are preferred.
Referring to fig. 2, fig. 2 is a flowchart of an ERCP report generating method according to an embodiment of the present application, which is applied to the server, and the ERCP report generating method includes:
Step 201: acquire the medical video to be processed, where the medical video to be processed includes white light video and X-ray video.
The server acquires the medical video to be processed from the video acquisition device or the database; this video is generated during the endoscopic retrograde cholangiopancreatography examination. The medical video to be processed includes white light video and X-ray video.
Step 202: invoke the corresponding trained image and video recognition model according to the type of the medical video to be processed, where the image and video recognition models include a white light recognition model or an X-ray recognition model.
Because the medical video to be processed includes both white light video and X-ray video, the white light recognition model is invoked to identify the white light video and the X-ray recognition model is invoked to identify the X-ray video.
Step 203: identify the white light video based on the trained white light recognition model to obtain a first recognition result, and identify the X-ray video based on the trained X-ray recognition model to obtain a second recognition result.
In the embodiment of the application, the white light recognition model may be trained by taking a large number of white light images and videos as training samples for the deep neural network and acquiring, at the same time, their annotated recognition labels. The training samples are input into the white light recognition model to obtain predicted recognition results for the white light images and videos, and the model parameters are trained according to the predicted results, the annotated labels, the loss function, and the loss target, yielding the final white light recognition model.
In the embodiment of the application, the X-ray recognition model may be trained in the same way: a large number of X-ray images and videos serve as training samples for the deep neural network, together with their annotated recognition labels. The training samples are input into the X-ray recognition model to obtain predicted recognition results, and the model parameters are trained according to the predicted results, the annotated labels, the loss function, and the loss target, yielding the final X-ray recognition model.
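The training procedure described in the two paragraphs above — predict on annotated samples, compare against expert labels through a loss function, and update parameters — can be sketched with a one-feature logistic classifier standing in for the deep network. The data, learning rate, and epoch count are illustrative only.

```python
import math

def train_recognition_model(samples, labels, epochs=200, lr=0.5):
    """Minimal supervised training loop: forward pass, binary cross-entropy
    gradient, parameter update. A real white light or X-ray model would be a
    deep network over video frames; here one weight and one bias suffice."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            grad = p - y                              # dBCE/dlogit
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    # Threshold the logistic output at 0.5 to get a class decision.
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

# Toy annotated samples: feature > 0 means the target finding is present.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_recognition_model(xs, ys)
```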
In one embodiment, the white light recognition model includes a diverticulum papilla classification module, a papilla morphology classification module, a first mixing module, and a second mixing module, and the step of identifying the white light video based on the trained white light recognition model to obtain a first recognition result includes: identifying white light images in the white light video through the diverticulum papilla classification module to obtain a diverticulum papilla classification result; identifying white light images in the white light video through the papilla morphology classification module to obtain a papilla morphology classification result; identifying the white light video through the first mixing module to obtain a first cannulation result corresponding to the white light video and the cannulation tool categories in the white light video; and identifying white light images in the white light video through the second mixing module to obtain the stone extraction tool categories and a stone identification result in the white light video.
Images of the major duodenal papilla are selected from the white light video and classified by specialist physicians into non-diverticular papilla, papilla beside a diverticulum, and papilla inside a diverticulum. The classified images are then used as training samples to train the parameters of the diverticulum papilla classification module, finally yielding the trained diverticulum papilla classification module.
Images of the major duodenal papilla are likewise selected from the white light video and classified by specialist physicians into normal papilla and opening versus abnormal papilla and opening. The classified images are used as training samples to train the parameters of the papilla morphology classification module, finally yielding the trained papilla morphology classification module.
In one embodiment, the first mixing module includes a pre-cut identification module, a cannulation attempt number and time identification module, and a cannulation tool identification module, and the step of identifying the white light video through the first mixing module to obtain the first cannulation result corresponding to the white light video and the cannulation tool categories in the white light video includes: identifying white light images in the white light video through the pre-cut identification module to obtain a pre-cut identification result; identifying the white light video through the cannulation attempt number and time identification module to obtain cannulation images and the first cannulation result; and identifying the cannulation images through the cannulation tool identification module to obtain the cannulation tool categories.
Pre-cut images are selected by a specialist, and the labeled images are used for parameter training of the pre-cut recognition module, yielding the trained module. White light cannulation videos are selected as training samples; in each video the frame where cannulation begins and the frame where cannulation succeeds are labeled, each sample video is annotated with its number of cannulation attempts, and an LSTM is trained on these samples to obtain the cannulation attempt recognition module. Images containing cannulation tools, such as a sphincterotome or a double guide wire, are selected and labeled, and the labeled images serve as training samples for the cannulation tool recognition module.
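As a rough illustration of the attempt-counting step (the patent trains an LSTM on labeled videos; the rule below is only a stand-in for what such a sequence model effectively outputs), each rising edge in a per-frame "cannula in contact with the papilla" sequence can be counted as one attempt:

```python
# Stand-in for the effective output of the trained attempt counter: given a
# per-frame boolean "cannula touches the papilla" sequence (as an LSTM would
# predict), each rising edge counts as one cannulation attempt.
def count_attempts(contact_flags):
    attempts = 0
    previous = False
    for flag in contact_flags:
        if flag and not previous:  # rising edge: a new attempt starts
            attempts += 1
        previous = flag
    return attempts

flags = [False, True, True, False, True, False, False, True]
attempts = count_attempts(flags)
```

The LSTM in the patent learns this attempt boundary directly from the annotated videos rather than from an explicit per-frame contact label.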
In one embodiment, the second hybrid recognition module includes a stone-extraction tool recognition module and a stone recognition module, and the step of identifying white light images in the white light video through the second hybrid recognition module to obtain the stone-extraction tool categories and the stone recognition result includes: identifying white light images in the white light video through the stone-extraction tool recognition module to obtain stone-extraction tool images; and identifying the white light images through the stone recognition module to obtain stone images. Fig. 3 is a schematic structural diagram of the white light recognition model according to an embodiment of the application.
Images containing stone-extraction tools, including sphincterotomes, baskets, and balloons, are selected and labeled by a specialist as training samples; parameter training on these samples yields the trained stone-extraction tool recognition module. Similarly, images containing extracted stones are selected and labeled by a specialist, and the stone recognition module is trained on the labeled samples to obtain the trained module.
In one embodiment, the X-ray recognition model includes a pancreatic duct entry count recognition module, a third hybrid recognition module, and a biliopancreatic duct stent recognition module, and the step of identifying the X-ray video based on the trained X-ray recognition model to obtain a second recognition result includes: identifying the X-ray video through the pancreatic duct entry count recognition module to obtain a corresponding second cannulation result; segmenting and identifying X-ray images in the X-ray video through the third hybrid recognition module to obtain a bile duct site recognition result; and identifying X-ray images in the X-ray video through the biliopancreatic duct stent recognition module to obtain a biliopancreatic duct stent recognition result.
X-ray cannulation videos are selected as training samples by a specialist, each sample video is annotated with the number of times the pancreatic duct is entered, and an LSTM (Long Short-Term Memory network) is trained on these samples to obtain the trained pancreatic duct entry count recognition module. X-ray images are classified by a specialist into three categories (biliary stent placed, pancreatic stent placed, and no stent) and labeled; the labeled images are used as training samples for parameter training of the biliopancreatic duct stent recognition module.
In one embodiment, the third hybrid recognition module includes a bile duct and endoscope segmentation module and a bile duct site recognition module; the step of segmenting and identifying X-ray images in the X-ray video through the third hybrid recognition module to obtain the bile duct site recognition result includes: segmenting X-ray images in the X-ray video through the bile duct and endoscope segmentation module to obtain a bile duct diameter segmentation result; and identifying the bile duct diameter segmentation result through the bile duct site recognition module to obtain the bile duct site recognition result. Fig. 4 is a schematic structural diagram of the X-ray recognition model according to an embodiment of the present application.
Bile duct images are labeled by a specialist, with the bile duct and the endoscope outlined in each image, and the labeled images serve as training samples for parameter training of the bile duct and endoscope segmentation module. Because the diameter of the endoscope is known, the trained module can compute the diameter of each part of the bile duct from the segmentation and grade the duct as normal, narrow, or dilated. The trained bile duct site recognition module then derives the site recognition result from the segmentation: the intrahepatic ducts comprise the left and right hepatic ducts, while the extrahepatic bile duct is divided into upper, middle, and lower segments, the upper segment lying above the junction of the cystic duct and the common hepatic duct, the middle segment running from that junction to the junction of the common bile duct and the pancreatic duct, and the lower segment lying below the latter junction.
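The diameter grading described above can be sketched as follows, using the endoscope's known outer diameter as the pixel-to-millimetre scale reference; the 11.3 mm scope diameter and the 4 mm / 10 mm grading thresholds are assumed values for illustration, not figures from the patent.

```python
# Illustrative sketch of the diameter grading: the endoscope's known outer
# diameter gives the pixel-to-millimetre scale for the segmented duct.
# The 11.3 mm scope diameter and the 4 mm / 10 mm thresholds are assumed
# values, not figures from the patent.
def grade_duct(duct_px, scope_px, scope_mm=11.3, narrow_mm=4.0, dilated_mm=10.0):
    duct_mm = duct_px * scope_mm / scope_px  # convert pixel width to mm
    if duct_mm < narrow_mm:
        grade = "narrow"
    elif duct_mm > dilated_mm:
        grade = "dilated"
    else:
        grade = "normal"
    return round(duct_mm, 1), grade
```

In practice `duct_px` and `scope_px` would come from the widths of the two segmentation masks measured on the same X-ray frame, so any projection scaling cancels out.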
In one embodiment, the step of identifying the white light video through the cannulation attempt and time recognition module to obtain cannulation images and the first cannulation result includes: identifying the white light video based on the cannulation attempt and time recognition module to obtain the number of cannulation attempts, the cannulation time, and cannulation images in the white light video; and determining the first cannulation result according to the number of cannulation attempts and the cannulation time.
In one embodiment, the step of identifying the white light video based on the cannulation attempt and time recognition module to obtain the cannulation time includes: identifying white light images in the white light video to obtain a first time corresponding to the frame where cannulation begins and a second time corresponding to the frame where cannulation succeeds; and obtaining the cannulation time from the first time and the second time.
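The computation in this step reduces to simple timestamp arithmetic, sketched below with hypothetical frame timestamps in seconds:

```python
# Minimal sketch of the timestamp arithmetic in this step: the cannulation
# time is the gap between the frame where cannulation begins and the frame
# where it succeeds. The example timestamps are hypothetical.
def cannulation_time(start_s, success_s):
    if success_s < start_s:
        raise ValueError("success frame precedes start frame")
    return success_s - start_s

elapsed = cannulation_time(42.0, 187.5)  # seconds into the white-light video
```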
Step 204: generating an ERCP report according to the first recognition result and the second recognition result.
The first recognition result and the second recognition result are converted into corresponding text, the corresponding images are retained, and the report is generated automatically. Fig. 5 is a schematic diagram of an ERCP report provided by an embodiment of the present application; the report contains images corresponding to the important steps of the ERCP procedure together with descriptions of those images.
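A hypothetical sketch of this assembly step: each recognition result is mapped to a line of text and paired with its retained key frame. All field names and wording below are invented for illustration.

```python
# Hypothetical sketch of step 204: each recognition result becomes a line of
# text paired with the index of its retained key frame. Field names and
# wording are invented for illustration, not taken from the patent.
def build_report(findings):
    lines = ["ERCP report"]
    for field, (text, frame_idx) in findings.items():
        lines.append(f"{field}: {text} (key frame {frame_idx})")
    return "\n".join(lines)

report = build_report({
    "papilla": ("papilla beside a diverticulum", 120),
    "cannulation": ("successful after 2 attempts", 845),
})
```

A production system would render the retained frames alongside the text, as in the report of fig. 5, rather than emitting plain strings.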
The embodiment of the application provides an ERCP report generation method that processes the medical video with neural network models and generates the report automatically, so that no manual film reading or report writing is required. This avoids the low accuracy of ERCP reports caused by the delay between the examination and the writing of the report and by differences in the experience and habits of endoscopists. The method therefore both ensures the accuracy of the ERCP report and improves the efficiency of endoscopic examination.
Based on the above method, this embodiment is further described from the perspective of an ERCP report generating device. Referring to fig. 6, fig. 6 shows a schematic structural diagram of an ERCP report generating device provided by an embodiment of the present application, which may include:
The acquisition module 601 is configured to acquire a medical video to be processed, where the medical video to be processed includes a white light video and an X-ray video;
The calling module 602 is configured to call a corresponding trained image and video recognition model according to the type of the medical video to be processed, where the image and video recognition model includes a white light recognition model or an X-ray recognition model;
the recognition module 603 is configured to recognize the white light video based on the trained white light recognition model to obtain a first recognition result, and recognize the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
and the generating module 604 is configured to generate an ERCP report according to the first identification result and the second identification result.
Accordingly, an embodiment of the present application also provides an electronic device, as shown in fig. 7, where the electronic device may include a radio frequency circuit 701, a memory 702 including one or more computer readable storage media, an input unit 703, a display unit 704, a sensor 705, an audio circuit 706, a WiFi module 707, a processor 708 including one or more processing cores, and a power supply 709. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 7 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein:
The radio frequency circuit 701 can be used to receive and transmit signals during information transmission or a call; in particular, after downlink information from a base station is received, it is handed to the one or more processors 708 for processing, and uplink data is transmitted to the base station. The memory 702 may be used to store software programs and modules, and the processor 708 performs various functional applications and data processing by running the software programs and modules stored in the memory 702. The input unit 703 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The display unit 704 may be used to display information input by a user or information provided to a user and various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof.
The electronic device may also include at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. The audio circuitry 706 includes speakers that may provide an audio interface between the user and the electronic device.
WiFi is a short-range wireless transmission technology; through the WiFi module 707 the electronic device can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 7 shows the WiFi module 707, it is not an essential component of the electronic device and may be omitted as needed without changing the essence of the application.
The processor 708 is the control center of the electronic device; it connects the various parts of the whole device through various interfaces and lines, and performs the functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 702 and invoking the data stored in the memory 702, thereby monitoring the electronic device as a whole.
The electronic device also includes a power supply 709 (e.g., a battery) for powering the various components, which may be logically connected to the processor 708 through a power management system so that functions such as charge, discharge, and power consumption management are implemented through the power management system.
Although not shown, the electronic device may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the processor 708 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 702 according to the following instructions, and the processor 708 executes the application programs stored in the memory 702, so as to implement the following functions:
Acquiring medical videos to be processed, wherein the medical videos to be processed comprise white light videos and X-ray videos;
According to the type of the medical video to be processed, a corresponding trained image and video recognition model is called, wherein the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
Identifying the white light video based on the trained white light identification model to obtain a first identification result, and identifying the X-ray video based on the trained X-ray identification model to obtain a second identification result;
And generating an ERCP report according to the first identification result and the second identification result.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the detailed description of the other embodiments, which is not repeated here.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the functions of:
Acquiring medical videos to be processed, wherein the medical videos to be processed comprise white light videos and X-ray videos;
According to the type of the medical video to be processed, a corresponding trained image and video recognition model is called, wherein the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
Identifying the white light video based on the trained white light identification model to obtain a first identification result, and identifying the X-ray video based on the trained X-ray identification model to obtain a second identification result;
And generating an ERCP report according to the first identification result and the second identification result.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
The storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
The instructions stored in the storage medium may perform the steps of any ERCP report generation method provided by the embodiments of the present application, and can therefore achieve the beneficial effects of any such method; details are given in the previous embodiments and are not repeated here.
The foregoing describes in detail the ERCP report generation method, device, electronic equipment, and computer-readable storage medium provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is intended only to help understand the technical solution and core idea of the application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may be modified, or some of their technical features may be replaced by equivalents, without departing from the spirit of the application.
Claims (8)
1. An ERCP report generation method, comprising:
Acquiring medical videos to be processed, wherein the medical videos to be processed comprise white light videos and X-ray videos;
According to the type of the medical video to be processed, a corresponding trained image and video recognition model is called, wherein the image and video recognition model comprises a white light recognition model or an X-ray recognition model;
Identifying the white light video based on the trained white light identification model to obtain a first identification result, and identifying the X-ray video based on the trained X-ray identification model to obtain a second identification result;
Generating an ERCP report according to the first identification result and the second identification result;
The white light recognition model comprises a diverticular papilla classification module, a papilla morphology classification module, a first hybrid recognition module, and a second hybrid recognition module, and the step of identifying the white light video based on the trained white light recognition model to obtain the first recognition result comprises:
identifying white light images in the white light video through the diverticular papilla classification module to obtain a diverticular papilla classification result in the white light video; identifying white light images in the white light video through the papilla morphology classification module to obtain a papilla morphology classification result in the white light video; identifying the white light video through the first hybrid recognition module to obtain a first cannulation result corresponding to the white light video and the cannulation tool categories in the white light video; and identifying white light images in the white light video through the second hybrid recognition module to obtain the stone-extraction tool categories and a stone recognition result in the white light video;
the X-ray recognition model comprises a pancreatic duct entry count recognition module, a third hybrid recognition module, and a biliopancreatic duct stent recognition module, and the step of identifying the X-ray video based on the trained X-ray recognition model to obtain the second recognition result comprises:
identifying the X-ray video through the pancreatic duct entry count recognition module to obtain a corresponding second cannulation result in the X-ray video; segmenting and identifying X-ray images in the X-ray video through the third hybrid recognition module to obtain a bile duct site recognition result in the X-ray video; and identifying X-ray images in the X-ray video through the biliopancreatic duct stent recognition module to obtain a biliopancreatic duct stent recognition result in the X-ray video.
2. The ERCP report generation method of claim 1, wherein the first hybrid recognition module comprises a pre-cut recognition module, a cannulation attempt and time recognition module, and a cannulation tool recognition module, and the step of identifying the white light video through the first hybrid recognition module to obtain the first cannulation result corresponding to the white light video and the cannulation tool categories in the white light video comprises:
identifying white light images in the white light video through the pre-cut recognition module to obtain a pre-cut recognition result;
identifying the white light video through the cannulation attempt and time recognition module to obtain cannulation images and the first cannulation result; and
identifying the cannulation images through the cannulation tool recognition module to obtain the cannulation tool categories.
3. The ERCP report generation method of claim 1, wherein the third hybrid recognition module comprises a bile duct and endoscope segmentation module and a bile duct site recognition module, and the step of segmenting and identifying X-ray images in the X-ray video through the third hybrid recognition module to obtain the bile duct site recognition result in the X-ray video comprises:
segmenting X-ray images in the X-ray video through the bile duct and endoscope segmentation module to obtain a bile duct diameter segmentation result; and
identifying the bile duct diameter segmentation result through the bile duct site recognition module to obtain the bile duct site recognition result.
4. The ERCP report generation method of claim 2, wherein the step of identifying the white light video through the cannulation attempt and time recognition module to obtain cannulation images and the first cannulation result comprises:
identifying the white light video based on the cannulation attempt and time recognition module to obtain the number of cannulation attempts, the cannulation time, and cannulation images in the white light video; and
determining the first cannulation result according to the number of cannulation attempts and the cannulation time.
5. The ERCP report generation method of claim 2, wherein the step of identifying the white light video based on the cannulation attempt and time recognition module to obtain the cannulation time in the white light video comprises:
identifying white light images in the white light video based on the cannulation attempt and time recognition module to obtain a first time corresponding to the frame where cannulation begins and a second time corresponding to the frame where cannulation succeeds; and
obtaining the cannulation time from the first time and the second time.
6. An ERCP report generating apparatus, comprising:
the acquisition module is used for acquiring medical videos to be processed, wherein the medical videos to be processed comprise white light videos and X-ray videos;
The calling module is used for calling a corresponding trained image and video identification model according to the type of the medical video to be processed, wherein the image and video identification model comprises a white light identification model or an X-ray identification model;
the recognition module is used for recognizing the white light video based on the trained white light recognition model to obtain a first recognition result, and recognizing the X-ray video based on the trained X-ray recognition model to obtain a second recognition result;
the generation module is used for generating an ERCP report according to the first identification result and the second identification result;
The white light recognition model comprises a diverticular papilla classification module, a papilla morphology classification module, a first hybrid recognition module, and a second hybrid recognition module, and the step of identifying the white light video based on the trained white light recognition model to obtain the first recognition result comprises:
identifying white light images in the white light video through the diverticular papilla classification module to obtain a diverticular papilla classification result in the white light video; identifying white light images in the white light video through the papilla morphology classification module to obtain a papilla morphology classification result in the white light video; identifying the white light video through the first hybrid recognition module to obtain a first cannulation result corresponding to the white light video and the cannulation tool categories in the white light video; and identifying white light images in the white light video through the second hybrid recognition module to obtain the stone-extraction tool categories and a stone recognition result in the white light video;
the X-ray recognition model comprises a pancreatic duct entry count recognition module, a third hybrid recognition module, and a biliopancreatic duct stent recognition module, and the step of identifying the X-ray video based on the trained X-ray recognition model to obtain the second recognition result comprises:
identifying the X-ray video through the pancreatic duct entry count recognition module to obtain a corresponding second cannulation result in the X-ray video; segmenting and identifying X-ray images in the X-ray video through the third hybrid recognition module to obtain a bile duct site recognition result in the X-ray video; and identifying X-ray images in the X-ray video through the biliopancreatic duct stent recognition module to obtain a biliopancreatic duct stent recognition result in the X-ray video.
7. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the ERCP report generation method of any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the ERCP report generation method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210127179.1A CN114464289B (en) | 2022-02-11 | 2022-02-11 | ERCP report generation method, ERCP report generation device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114464289A CN114464289A (en) | 2022-05-10 |
CN114464289B true CN114464289B (en) | 2024-06-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||