CN112932663A - Intelligent auxiliary method and system for improving safety of laparoscopic cholecystectomy - Google Patents


Info

Publication number
CN112932663A
CN112932663A (application CN202110228978.3A)
Authority
CN
China
Prior art keywords
surgical
cvs
stage
model
anatomical structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110228978.3A
Other languages
Chinese (zh)
Other versions
CN112932663B (en)
Inventor
刘杰
王昕
安晶晶
刘润文
游嘉颖
程科
官敬业
王玉贤
吴少南
刘遵宇
何刚
Current Assignee
Chengdu Yurui Innovation Technology Co ltd
Original Assignee
Chengdu Yurui Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Yurui Innovation Technology Co ltd filed Critical Chengdu Yurui Innovation Technology Co ltd
Priority to CN202110228978.3A priority Critical patent/CN112932663B/en
Publication of CN112932663A publication Critical patent/CN112932663A/en
Application granted granted Critical
Publication of CN112932663B publication Critical patent/CN112932663B/en
Legal status: Active (granted)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/102: Modelling of surgical devices, implants or prosthesis
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition

Abstract

The invention relates to an intelligent assistance method and system for improving the safety of laparoscopic cholecystectomy. Images collected in real time by a laparoscope are fed into an artificial-intelligence computer model for processing, yielding quantitative data on surgical elements; the surgical elements are then fed into a safe-surgery model for analysis, which outputs guidance elements for reducing iatrogenic biliary tract injury, namely real-time anatomical-structure markers and a real-time CVS (Critical View of Safety) score; the surgeon then performs the operation according to the real-time anatomical markers and the real-time CVS score. Combined with laparoscopic machines currently on the market, the invention provides intelligent assessment of, and feedback on, important intraoperative anatomical information and CVS completion. In teaching hospitals, it can help residents and specialists establish intraoperative safety awareness and cultivate standardized operating skills; in primary hospitals or regions with underdeveloped medical care, it can export expert technique and assist local doctors in completing operations safely and efficiently.

Description

Intelligent auxiliary method and system for improving safety of laparoscopic cholecystectomy
Technical Field
The invention relates to the technical field of image recognition and processing, and in particular to an intelligent assistance method and system for improving the safety of laparoscopic cholecystectomy.
Background
Cholecystectomy is currently one of the operations most frequently performed by general surgeons worldwide. With the continuous development and popularization of minimally invasive techniques, laparoscopic cholecystectomy has become the "gold standard" for treating symptomatic gallstones and polyps. However, iatrogenic biliary tract injury (BDI) remains a persistent problem for all general surgeons, in both the open-surgery and the laparoscopic-surgery eras. Statistically, about 750,000 to 1,000,000 people undergo cholecystectomy in the United States each year, and over 3,000,000 patients undergo cholecystectomy in China each year. Research data show that the incidence of biliary tract injury in laparoscopic cholecystectomy is about 0.3%-0.5%.
The main causes of iatrogenic biliary tract injury fall into three areas. 1. Surgical technique and experience: when some young doctors perform laparoscopic cholecystectomy independently, they have not yet formed good habits of operative safety awareness, and once complications arise intraoperatively (inflammation, bleeding, adhesion), they often become flustered and cause accidental injury. 2. Anatomical variation: the biliary system is complex, and the various anatomical variations seen in a minority of patients increase operative difficulty; cognitive bias about bile duct anatomy caused by habitual thinking leads to injury. 3. Pathological type: in patients with acute cholecystitis secondary to cholelithiasis, tissue congestion, edema and inflammatory adhesion increase operative difficulty, and this factor is also one of the high-risk factors for biliary tract injury. Existing approaches to reducing biliary tract injury rely mainly on expert experience, which requires long accumulation by each doctor, yet in practice most doctors' experience is insufficient. How to reduce biliary tract injury during cholecystectomy by means of artificial intelligence is therefore a problem urgently needing a solution at the present stage.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an intelligent assistance method and system for improving the safety of laparoscopic cholecystectomy, overcoming the limitation that conventional cholecystectomy can be judged only by expert experience.
The purpose of the invention is realized by the following technical scheme: an intelligent assistance method and system for improving safety of laparoscopic cholecystectomy, the intelligent assistance method comprising:
step one, obtaining, through training on labeled data, an artificial-intelligence computer model for identifying surgical elements and a medically grounded safe-surgery model, wherein the safe-surgery model is established on the basis of the surgical elements produced by the artificial-intelligence computer model;
step two, inputting images collected in real time by the laparoscope into the artificial-intelligence computer model for processing, and obtaining quantitative data on the surgical elements;
step three, inputting the surgical elements into the safe-surgery model for analysis, and outputting guidance elements for reducing iatrogenic biliary tract injury, namely real-time anatomical-structure markers and a real-time CVS (Critical View of Safety) score;
step four, performing the surgical operation according to the real-time anatomical markers and the real-time CVS score, with real-time prompts during the procedure.
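As a minimal sketch of steps two through four, assuming stub stand-ins for the two models (the function names, labels and the CVS completion threshold of 3 are illustrative assumptions, not the patent's actual implementation):

```python
# Hypothetical stand-ins for the two models; names and values are
# illustrative, not the patent's actual implementation.

def extract_surgical_elements(frame):
    """Step two: the AI computer model quantifies surgical elements."""
    return {"stage": "dissect_hepatobiliary_triangle",
            "instruments": ["electric_hook"],
            "anatomy": {"cystic_duct": (120, 80, 40, 30)},  # x, y, w, h box
            "cvs_score": 2}

def safe_surgery_analysis(elements):
    """Step three: the safe-surgery model turns elements into guidance."""
    cvs_complete = elements["cvs_score"] >= 3   # assumed completion threshold
    return {"anatomy_marks": elements["anatomy"],
            "cvs_light": "green" if cvs_complete else "red"}

def assist(frames):
    """Steps two-four: per-frame guidance shown to the surgeon in real time."""
    return [safe_surgery_analysis(extract_surgical_elements(f)) for f in frames]

guidance = assist([object()])    # one dummy frame
print(guidance[0]["cvs_light"])  # red, since the stub CVS score is 2
```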
The surgical elements include: surgical instruments, anatomical structures, surgical stage, and CVS score. The medically grounded safe-surgery model is established from the relations among the different surgical stages, surgical instruments, anatomical structures and CVS scores, combined with the duration of each surgical stage, the total duration, and prior surgical experience.
The construction step of the artificial intelligence computer model comprises the following steps:
collecting a laparoscopic surgery video and labeling the picture and the video to obtain a surgery labeling database;
constructing, respectively, a surgical-instrument detection model, an anatomical-structure detection model, a CVS (Critical View of Safety) scoring model and a surgical-stage recognition model via the YOLO v4 algorithm, the EfficientNet algorithm and ResNet + LSTM, according to the surgical-instrument annotation data, anatomical-structure annotation data, CVS-score annotation data and surgical-stage annotation data in the surgical annotation database.
The safe operation model construction step comprises the following steps:
identifying surgical elements according to the surgical instrument detection model, the anatomical structure detection model, the CVS scoring model and the surgical stage identification model obtained in the artificial intelligence computer model construction step;
constructing, in combination with expert experience, guidelines and the literature, a safe-surgery model of the numerical relations among cholecystectomy anatomy, cholecystectomy instrument use, cholecystectomy stage relations, cholecystectomy duration calculation and the CVS score.
Collecting the laparoscopic surgery videos and annotating the pictures and videos to obtain the surgical annotation database comprises:
A1, extracting frames from the surgical videos with FFmpeg software at a rate of 1 frame per second to obtain a surgical picture library, and manually annotating the extracted pictures with anatomical structures, CVS scores and surgical instruments using Labelme software;
A2, transcoding the surgical videos into a uniform format with FFmpeg to obtain a surgical video library, and annotating each surgical stage in the videos;
A3, establishing the surgical annotation database from the annotation data obtained in steps A1 and A2.
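Steps A1 and A2 can be sketched as FFmpeg command construction; the file paths and the output filename pattern are illustrative.

```python
import shlex

def extract_frames_cmd(video_path, out_dir):
    # A1: extract 1 frame per second as numbered images
    return ["ffmpeg", "-i", video_path, "-vf", "fps=1",
            f"{out_dir}/frame_%06d.png"]

def transcode_cmd(video_path, out_path):
    # A2: transcode to a uniform (MPEG-4) format
    return ["ffmpeg", "-i", video_path, "-c:v", "mpeg4", out_path]

print(shlex.join(extract_frames_cmd("lc_case_001.mp4", "frames")))
```

These command lists can be passed to `subprocess.run` in a batch loop over the video library.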
The anatomical structures comprise the cystic duct, cystic artery, common bile duct, gallbladder bed and hepatobiliary triangle region. The CVS-score-related areas comprise the hepatobiliary triangle region, the lower third of the gallbladder bed, and the two-duct region of the cystic duct and cystic artery. The surgical instruments comprise the trocar, atraumatic grasping forceps, electric hook, clip applier, absorbable clip, metal clip, Maryland forceps, gauze, straight dissecting forceps, scissors, large grasping forceps, dissecting forceps, specimen bag, puncture needle, suction device, electrocoagulation device, drainage tube and wave forceps. The surgical stages comprise a pneumoperitoneum-establishment stage, an adhesion-release stage, a hepatobiliary-triangle dissection stage, a gallbladder-bed mobilization stage, a surgical-field cleaning stage and a gallbladder-retrieval stage.
Constructing the surgical-instrument detection model, anatomical-structure detection model, CVS scoring model and surgical-stage recognition model from the surgical-instrument, anatomical-structure, CVS-score and surgical-stage annotation data in the surgical annotation database comprises the following steps:
constructing the surgical-instrument detection model and the anatomical-structure detection model from the surgical-instrument annotation data, anatomical-structure annotation data and CVS-score annotation data via the YOLO v4 algorithm;
constructing the CVS scoring model from the CVS-score annotation data via EfficientNet;
constructing the surgical-stage recognition model from the surgical-stage annotation data via ResNet + LSTM.
The detection and recognition steps of the surgical-instrument detection model comprise: detecting the various instruments appearing in a picture and applying mutual-exclusion processing to the detections. The mutual-exclusion rules are: at most one of the straight dissecting forceps, dissecting forceps and Maryland forceps may appear at the same time; the clip applier and scissors cannot appear at the same time; the atraumatic grasping forceps and wave forceps cannot appear at the same time; and the electric hook, clip applier, scissors, large grasping forceps, suction device and electrocoagulation device may each appear only once at a time.
The detection and recognition steps of the anatomical-structure detection model comprise: detecting the various anatomical structures appearing in a picture and applying exclusion processing to the detection results, whereby at most one instance of each anatomical structure may appear at the same time.
The scoring step of the CVS scoring model comprises: feeding the picture to be scored into the EfficientNet network and scoring the degree of CVS completion.
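The instrument mutual-exclusion rules above can be sketched as a post-processing filter over per-frame detections, keeping the higher-confidence detection when a rule is violated; the label strings are illustrative.

```python
# Sketch of the instrument mutual-exclusion rules as a post-processing
# filter over per-frame detections. Label strings are illustrative.

EXCLUSIVE_GROUPS = [
    {"straight_dissecting_forceps", "dissecting_forceps", "maryland_forceps"},
    {"clip_applier", "scissors"},
    {"atraumatic_grasper", "wave_forceps"},
]
# instruments that may appear at most once in a frame
SINGLETONS = {"electric_hook", "clip_applier", "scissors",
              "large_grasper", "suction", "electrocoagulation"}

def apply_exclusion(detections):
    """detections: list of (label, confidence); higher confidence wins."""
    out, seen = [], set()
    for label, conf in sorted(detections, key=lambda d: -d[1]):
        if label in SINGLETONS and label in seen:
            continue                                  # second instance dropped
        group = next((g for g in EXCLUSIVE_GROUPS if label in g), set())
        if any(other in seen for other in group - {label}):
            continue                                  # exclusive partner present
        out.append((label, conf))
        seen.add(label)
    return out

print(apply_exclusion([("clip_applier", 0.9), ("scissors", 0.8)]))
```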
The detection and recognition of the surgical-stage recognition model comprises the following steps:
feeding n consecutive frames into a ResNet network for feature extraction to obtain feature vectors;
feeding the multiple feature vectors into an LSTM network, performing stage recognition on the last picture of the consecutive pictures, and applying stage post-processing to all stages recognized so far.
The stage post-processing comprises the following steps:
denoising with a sliding window, and denoising according to duration;
in the recognition between two occurrences of the same stage, assigning portions not assigned to any stage to the adjacent stage, and then denoising the sandwiched segment;
adjusting abnormal positions according to expert opinion;
replacing portions recognized as having no stage with the stage of the previous moment.
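A minimal sketch of the sliding-window denoising and the previous-stage fill-in, assuming a per-second list of stage labels with `None` for "no stage" (the window size is an illustrative choice):

```python
# Sliding-window majority-vote denoising over a per-second stage sequence,
# then filling moments with no recognized stage (None) from the previous
# moment. Window size is an illustrative choice.
from collections import Counter

def denoise_stages(stages, window=5):
    smoothed = []
    for i in range(len(stages)):
        lo = max(0, i - window // 2)
        hi = min(len(stages), i + window // 2 + 1)
        smoothed.append(Counter(stages[lo:hi]).most_common(1)[0][0])
    for i in range(1, len(smoothed)):
        if smoothed[i] is None:          # no stage: carry the previous stage
            smoothed[i] = smoothed[i - 1]
    return smoothed

print(denoise_stages([1, 1, 2, 1, 1, None, 1]))  # [1, 1, 1, 1, 1, 1, 1]
```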
A system based on the intelligent assistance method for reducing biliary tract injury in laparoscopic cholecystectomy, the system comprising:
an image acquisition module configured to acquire a laparoscopic cholecystectomy video stream, yielding a sequence of video frames;
an instrument detection module configured to extract features with a CSPDarkNet network, use the resulting feature map as input to a subsequent detection network, and finally obtain the instrument types appearing in the corresponding video frame;
an organ detection module configured to extract features with a CSPDarkNet network, use the resulting feature map as input to a subsequent detection network, and finally obtain the organ positions appearing in the corresponding video frame;
an anatomical-element region detection module configured to extract features with a CSPDarkNet network, use the resulting feature map as input to a subsequent detection network, and finally obtain the positions of anatomical-element regions appearing in the corresponding video frame;
a CVS scoring module configured to extract image features with a CNN, use the resulting feature map as input to a subsequent fully connected network, and finally obtain the CVS score of the corresponding video frame;
a stage recognition module configured to extract features from N images with a ResNet network, use the resulting feature maps as input to a subsequent long short-term memory network, and finally obtain the surgical stage of the N-th image; and
a system output module that outputs, via a display screen, information including the current surgical stage, the surgical instrument in use, the correct anatomical structures and the CVS score.
The application has the following beneficial effects:
1. The method and system use an artificial-intelligence computer model to identify key surgical elements such as surgical instruments, anatomical structures, the surgical stage and the real-time exposure of the safety view during surgery, while collecting and consolidating the multidimensional requirements for safe surgery to establish a safe-surgery model. Both models are built with deep neural networks. During an actual operation, the extracted key surgical elements are computed against and matched with the safe-surgery model in real time, and prompts are given as pictures and text, effectively assisting the surgeon's operative safety.
2. During a procedure, inaccurate anatomical recognition and poor understanding of the surgical process by the surgeon may lead to erroneous decisions and surgical injury. The method innovatively takes the surgical instruments, anatomical structures, surgical stage and real-time exposure of the safety view as recognition objects, quantitatively analyzes this information with artificial-intelligence deep learning, and, through a medically grounded safe-surgery model, indicates the correct anatomical position, guides the operation, warns of possible intraoperative risks and reduces the probability of surgical injury.
3. In addition, the invention can be used online in real time during an operation, or offline for safety evaluation of the surgical video after the operation is completed. In a teaching hospital, it can help residents and specialists establish intraoperative safety awareness and cultivate standardized operating skills; in primary hospitals or regions with underdeveloped medical care, it can export expert technique and assist local doctors in completing operations safely and efficiently.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of a data annotation process;
FIG. 3 is a schematic diagram of a flow chart of an artificial intelligence computer model construction;
FIG. 4 is a structural diagram of a model of the YOLOv4 algorithm;
FIG. 5 is a schematic diagram of an EfficientNet algorithm model structure;
FIG. 6 is a diagram showing the structure of the ResNet + LSTM algorithm model.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments of the present application provided below in connection with the appended drawings is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the present invention relates to an intelligent assistance method and system for improving the safety of laparoscopic cholecystectomy, specifically comprising the following:
collecting laparoscopic surgery videos and annotating the pictures and videos to obtain a surgical annotation database; training on the annotated data to obtain an artificial-intelligence computer model for identifying surgical elements and a medically grounded safe-surgery model, the safe-surgery model being established on the basis of the surgical elements produced by the artificial-intelligence computer model;
constructing, respectively, the artificial-intelligence computer models, namely a surgical-instrument detection model, an anatomical-structure detection model, a CVS (Critical View of Safety) scoring model and a surgical-stage recognition model, via the YOLO v4 algorithm and ResNet + LSTM, from the surgical-instrument, anatomical-structure, CVS-score and surgical-stage annotation data in the surgical annotation database;
identifying surgical elements with the surgical-instrument detection model, anatomical-structure detection model, CVS scoring model and surgical-stage recognition model obtained in the model-construction step;
constructing, in combination with expert experience, guidelines and the literature, a safe-surgery model of the numerical relations among cholecystectomy anatomy, instrument use, stage relations, operative-duration calculation and the CVS score;
inputting images acquired in real time by the laparoscope into the artificial-intelligence computer model for processing, and obtaining quantitative data on the surgical elements;
inputting the surgical elements into the safe-surgery model for analysis, and outputting guidance elements for reducing iatrogenic biliary tract injury, namely real-time anatomical-structure markers and a real-time CVS score; the real-time anatomical markers comprise real-time marking of the cystic artery, cystic duct, gallbladder bed, common bile duct and hepatobiliary triangle; the real-time CVS score comprises computing and scoring CVS exposure in real time, including exposure of the hepatobiliary triangle, mobilization of the gallbladder bed, and the connection of the gallbladder to its two ducts;
performing the operation according to the real-time anatomical markers and the real-time CVS score, giving the operator a red-light prompt while the CVS is incomplete and a green-light prompt once the CVS is complete. If, before the CVS is complete, the system detects a transection operation with the associated instruments (clip applier, absorbable clip), the system sounds an alarm.
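The red/green light and alarm rule can be sketched as follows, assuming a CVS completion threshold of 3 (an illustrative value, not stated in the source):

```python
# Red/green light and alarm rule: red while CVS is incomplete, green once
# complete; clip/cut instruments seen before completion trigger an alarm.
# The completion threshold of 3 is an illustrative assumption.

CLIP_CUT_INSTRUMENTS = {"clip_applier", "absorbable_clip"}

def cvs_prompt(cvs_score, instruments, threshold=3):
    complete = cvs_score >= threshold
    light = "green" if complete else "red"
    alarm = not complete and bool(CLIP_CUT_INSTRUMENTS & set(instruments))
    return light, alarm

print(cvs_prompt(2, ["clip_applier"]))   # ('red', True)
print(cvs_prompt(3, ["clip_applier"]))   # ('green', False)
```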
As shown in fig. 2, videos are collected subject to the requirement that each surgical video have a resolution of no less than 720 x 560 and no less than 25 frames per second. The collected pictures are annotated by the following process, the main annotation content being the important anatomical structures, the CVS scoring criteria, the instruments and the surgical stages.
For the anatomy, CVS-assessment and surgical-instrument parts, frames were extracted from all surgical videos with FFmpeg software at 1 frame per second. Labelme software was then used to manually annotate the extracted pictures with anatomical structures, CVS scores and surgical instruments. The anatomical-structure labels comprise five structures: the cystic duct, cystic artery, common bile duct, gallbladder bed and hepatobiliary triangle. The CVS-score-related areas comprise the hepatobiliary triangle region, the lower third of the gallbladder bed, and the two-duct region of the cystic duct and cystic artery; the different degrees of CVS completion are scored according to the international CVS criteria.
Eighteen surgical instruments are annotated: the trocar, atraumatic grasping forceps, electric hook, clip applier, absorbable clip, metal clip, Maryland forceps, gauze, straight dissecting forceps, scissors, large grasping forceps, dissecting forceps, specimen bag, puncture needle, suction device, electrocoagulation device, drainage tube and wave forceps.
For the surgical-stage part, all surgical videos are uniformly transcoded into MPEG-4 format with FFmpeg software. Stage annotation of the videos is performed with the Anvil Video Annotation Research Tool, the operation being divided, according to the initial definitions and later practical requirements, into six stages: 1. establishing pneumoperitoneum; 2. releasing adhesions; 3. dissecting the hepatobiliary triangle; 4. mobilizing the gallbladder bed; 5. cleaning the surgical field; 6. retrieving the gallbladder. After manual stage annotation, frames were likewise extracted at 1 frame per second with FFmpeg. All manual annotation and scoring in every part is checked by six qualified hepatobiliary-pancreatic surgical specialists after preliminary training; unqualified annotations and scores are returned for revision, and only qualified annotations are used as subsequent machine-learning data.
As shown in FIG. 3, the artificial-intelligence computer model consists mainly of machine-learning models for the anatomical structures, the CVS exposure score, the instruments and the surgical stage. The annotated anatomical-structure, CVS-exposure-score, surgical-instrument and surgical-stage data are randomly split into training, validation and test sets in the ratio 8:1:1. All models were developed on the Anaconda and Qt Creator platforms, with image processing on an NVIDIA Tesla V100 graphics processor.
As shown in fig. 4, YOLOv4 first extracts image features with a CSPDarkNet53 network and finally achieves target detection at multiple scales through three successive upsamplings; the model also uses an SPP block, a SAM block and a PAN path-aggregation block in the detection process. In training, Mosaic data augmentation, DropBlock regularization, multi-input weighted residual connections, CIoU loss, CmBN, a cosine annealing scheduler, random training shapes/dynamic mini-batch size, and DIoU-NMS are employed.
The CSPDarkNet feature-extraction network consists of Conv-BN-Mish modules, Residual modules and CSP modules. A Conv-BN-Mish module consists of a convolution layer, a batch-normalization layer and a Mish activation function. The Residual modules perform tensor addition on the computation results of different levels; one Residual module contains 1, 2, 4 or 8 Conv-BN-Mish modules. The CSP module performs two different computations on the same feature map: one passes only through a convolution with a 1 x 1 kernel, the other passes through a convolution with a 1 x 1 kernel and then through one or more Residual modules; finally the two results are concatenated. The Mish activation function increases the model's nonlinearity.
The CSP module represents a data flow. Since convolution is computed as tensor dot products, the input is a 4-D tensor with dimensions N, C, H and W, denoting in turn the number of pictures, the number of channels, the picture height and the picture width. The CSP module concatenates two tensors of shapes (N, C1, H, W) and (N, C2, H, W) into a tensor of shape (N, C1 + C2, H, W) as output; this operation on tensors, concatenating multiple arrays by expanding along one dimension, is referred to below as channel concatenation.
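Channel concatenation as just described, demonstrated with NumPy arrays standing in for framework tensors:

```python
# Channel concatenation: (N, C1, H, W) and (N, C2, H, W) join along the
# channel axis into (N, C1 + C2, H, W).
import numpy as np

a = np.zeros((2, 16, 32, 32))           # N=2, C1=16
b = np.ones((2, 8, 32, 32))             # N=2, C2=8
out = np.concatenate([a, b], axis=1)    # axis 1 is the channel dimension
print(out.shape)                        # (2, 24, 32, 32)
```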
y = Σ_{i=1}^{n} w_i · x_i + b
The above is the convolution calculation formula, where y is the convolution output, n the number of neurons, w_i the weight of the i-th neuron, x_i the input data of the i-th neuron, and b a bias added to the result.
BN(x) = γ · (x − μ) / √(σ² + ζ) + β
The above is the batch-normalization calculation formula, where BN is the batch-normalization output, x the input data, μ the mean of the x tensor, σ² the variance of the x tensor, ζ a very small parameter ensuring the denominator is not 0, and γ and β learnable coefficients.
MA(x) = x · tanh(ln(1 + e^x))
The above is the formula of the Mish activation function, where MA is the activation output and x the input data.
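The Mish activation in plain Python, with ln(1 + e^x) written as log1p(exp(x)):

```python
import math

def mish(x):
    # MA(x) = x * tanh(ln(1 + e^x))
    return x * math.tanh(math.log1p(math.exp(x)))

print(round(mish(1.0), 4))   # 0.8651
```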
y_res = x + F(x)
The above is the Residual-module calculation formula, where y_res is the Residual-module output, x the input data, and F(x) the result of passing x through other computational operations, such as several Conv-BN-Mish modules.
A tensor of shape (N, C, H1, W1) yields, after convolution, a tensor of shape (N, C, H2, W2) with H2 < H1 and W2 < W1. The SPP block passes the same feature map through Conv-BN-Mish modules with different numbers and sizes of convolution kernels, and takes the channel concatenation of the results as input to the next stage. The SAM block is a weighting operation: a tensor x is passed through a convolution layer to obtain a tensor of unchanged shape, that tensor is constrained with a sigmoid activation function, and the sigmoid result is element-wise multiplied with x; the product is the SAM block's final output. The PAN path-aggregation block replaces the addition of the route operation in the detection process with channel concatenation, the route operation being, like the Residual module, a mode of data flow.
Mosaic data augmentation randomly selects one picture, randomly selects a region within it, then randomly selects three more pictures whose widths and heights are determined from the first, and finally stitches the pictures into one square as the network input. DropBlock regularization is a modified version of Dropout that randomly ignores one or more rectangular regions of a feature map during network computation. Multi-input weighted residual connections denote bidirectional cross-scale connection and weighted feature fusion: a deeper feature map weights a shallower one, and finally the weighted feature map is fused with an unweighted feature map. The CIoU loss is a loss calculation accounting for the IoU and center-point offset of the detected and labeled rectangular boxes and the aspect ratio of the target versus the prediction. CmBN enables large-batch training on a small machine by accumulating several small mini-batches into one large batch for training. The cosine annealing scheduler adjusts the learning rate; random training shapes/dynamic mini-batch size mean that the input picture size differs every 1000 iterations, chosen in (416, 608). DIoU-NMS is used to determine the degree of matching of two rectangular boxes.
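A minimal sketch of the Mosaic stitching just described, with NumPy; real pipelines also rescale each source image and remap its annotation boxes, which is omitted here (top-left crops stand in for resizing).

```python
# Mosaic stitching: four images pasted into the four quadrants around a
# random split point, forming one square network input.
import random
import numpy as np

def mosaic(images, size=416, seed=0):
    rng = random.Random(seed)
    cx = rng.randint(size // 4, 3 * size // 4)   # random split point
    cy = rng.randint(size // 4, 3 * size // 4)
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    quadrants = [(0, cy, 0, cx), (0, cy, cx, size),
                 (cy, size, 0, cx), (cy, size, cx, size)]
    for img, (y0, y1, x0, x1) in zip(images, quadrants):
        canvas[y0:y1, x0:x1] = img[:y1 - y0, :x1 - x0]  # crop stands in for resize
    return canvas

imgs = [np.full((416, 416, 3), i, np.uint8) for i in range(4)]
print(mosaic(imgs).shape)   # (416, 416, 3)
```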
The multi-input weighted residual connections can be written, for feature level $i$, as

$$P_i^{out} = \mathrm{Conv}\!\left(P_i^{in} + P_i^{td} + \mathrm{Resize}\!\left(P_{i-1}^{out}\right)\right)$$

representing the bottom-up computing outputs, which are recorded as the last several computing outputs, and

$$P_i^{td} = \mathrm{Conv}\!\left(P_i^{in} + \mathrm{Resize}\!\left(P_{i+1}^{td}\right)\right)$$

representing the top-down computational outputs, where $P_i^{in}$ denotes the input feature map at level $i$; Conv() represents a convolution operation; Resize() represents an upsampling (or downsampling) operation that matches spatial sizes.
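A single top-down fusion node of this kind can be sketched as follows; the nearest-neighbour `resize` and the mean-filter stand-in for the learned convolution are assumptions made for the sake of a runnable example:

```python
import numpy as np

def resize(fm, h, w):
    """Nearest-neighbour Resize() used to match spatial sizes."""
    ys = np.arange(h) * fm.shape[0] // h
    xs = np.arange(w) * fm.shape[1] // w
    return fm[ys][:, xs]

def conv(fm):
    """Stand-in for Conv(): a 3x3 mean filter with edge padding.
    (The real block is a learned convolution; this keeps the sketch
    self-contained.)"""
    pad = np.pad(fm, 1, mode="edge")
    return sum(pad[dy:dy + fm.shape[0], dx:dx + fm.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def td_node(p_in, p_deeper):
    """One top-down node: Conv(P_i_in + Resize(P_{i+1})), where the
    deeper, lower-resolution map is upsampled before fusion."""
    h, w = p_in.shape
    return conv(p_in + resize(p_deeper, h, w))

p_in = np.ones((8, 8))      # current-level input feature map
p_deep = np.ones((4, 4))    # deeper, lower-resolution feature map
out = td_node(p_in, p_deep)
print(out.shape)            # (8, 8)
```

With both inputs set to ones, the fused output is uniformly 2.0, which makes the additive fusion easy to verify by hand.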
The CIoU loss is computed as

$$\mathcal{L}_{CIoU} = 1 - \frac{\left|B \cap B^{gt}\right|}{\left|B \cup B^{gt}\right|} + \frac{\rho^2\!\left(b, b^{gt}\right)}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^{2}$$

where $\left|B \cap B^{gt}\right|$ represents the size of the intersection area of the detection box and the label box; $\left|B \cup B^{gt}\right|$ represents the size of the union area of the detection box and the label box; $\rho(\cdot,\cdot)$ denotes the Euclidean distance; $b$ represents the center point of the detection box; $b^{gt}$ represents the center point of the label box; $c$ is the diagonal length of the smallest enclosing box of the detection box and the label box; $\alpha$ represents a weight parameter; $w^{gt}$ represents the width of the label box; $w$ represents the width of the detection box; $h^{gt}$ represents the height of the label box; and $h$ represents the height of the detection box.
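A direct transcription of the CIoU loss for two axis-aligned boxes, assuming an (x1, y1, x2, y2) box encoding (the patent does not fix one):

```python
import math

def ciou_loss(box, gt):
    """CIoU loss for axis-aligned boxes given as (x1, y1, x2, y2),
    following the formulation described above."""
    # intersection and union areas
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    iou = inter / (area(box) + area(gt) - inter)
    # squared center distance over squared enclosing-box diagonal
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    cw = max(box[2], gt[2]) - min(box[0], gt[0])
    ch = max(box[3], gt[3]) - min(box[1], gt[1])
    rho2, c2 = (bx - gx) ** 2 + (by - gy) ** 2, cw ** 2 + ch ** 2
    # aspect-ratio consistency term v with its weight alpha
    v = 4 / math.pi ** 2 * (math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                            - math.atan((box[2] - box[0]) / (box[3] - box[1]))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```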
DIoU-NMS uses the distance-IoU

$$DIoU = \frac{\left|B \cap B^{gt}\right|}{\left|B \cup B^{gt}\right|} - \frac{\rho^2\!\left(b, b^{gt}\right)}{c^2}$$

as its box-matching criterion, where $\left|B \cap B^{gt}\right|$ represents the size of the intersection area of the two boxes; $\left|B \cup B^{gt}\right|$ represents the size of their union area; $\rho(\cdot,\cdot)$ denotes the Euclidean distance; $b$ and $b^{gt}$ represent the center points of the two boxes; and $c$ is the diagonal length of the smallest enclosing box of the two boxes.
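DIoU and a greedy DIoU-based NMS can be sketched as follows; the suppression threshold of 0.5 is an illustrative choice, not a value stated in the patent:

```python
def diou(a, b):
    """Distance-IoU of two (x1, y1, x2, y2) boxes, per the formula above."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    iou = inter / (area(a) + area(b) - inter)
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    cw = max(a[2], b[2]) - min(a[0], b[0])   # enclosing-box width
    ch = max(a[3], b[3]) - min(a[1], b[1])   # enclosing-box height
    return iou - ((ax - bx) ** 2 + (ay - by) ** 2) / (cw ** 2 + ch ** 2)

def diou_nms(boxes, scores, thresh=0.5):
    """Greedy NMS: suppress a box when its DIoU with an already kept,
    higher-scoring box exceeds `thresh` (threshold is an assumption)."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(diou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(diou_nms(boxes, scores))  # box 1 overlaps box 0 heavily; box 2 is kept
```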
The current learning rate is computed with the cosine annealing formula

$$\eta_t = \eta_{min} + \frac{1}{2}\left(\eta_{max} - \eta_{min}\right)\left(1 + \cos\!\left(\frac{t}{T}\pi\right)\right)$$

where $\eta_t$ represents the current learning rate; $\eta_{min}$ represents the minimum selectable learning rate; $\eta_{max}$ represents the maximum selectable learning rate; $t$ represents the current round; and $T$ represents the total number of rounds.
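The schedule can be written as a one-line function; the minimum and maximum learning rates below are illustrative values, not ones stated in the patent:

```python
import math

def cosine_lr(t, total, lr_min=1e-5, lr_max=1e-2):
    """Cosine-annealed learning rate for round t of `total`, per the
    formula above (lr_min/lr_max values are illustrative)."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / total))

print(cosine_lr(0, 100))    # starts at lr_max
print(cosine_lr(100, 100))  # decays to lr_min
```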
The surgical instrument detection model first detects the instruments appearing in the frame with the YOLOv4 algorithm model, and then performs mutual-exclusion processing on the detections, as follows: only one of the straight dissecting forceps, dissecting forceps and Maryland forceps may appear at a time; the clip applier and the scissors cannot appear at the same time; the atraumatic grasping forceps and the wave forceps cannot appear at the same time; the electric hook, clip applier, scissors, large grasping forceps, aspirator and electrocoagulation device may each appear only once at a time. This processing finally determines which instruments appear in the current frame.
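A minimal sketch of this mutual-exclusion post-processing; keeping the highest-confidence detection within each exclusive group is an assumption, since the patent does not state how the surviving instrument is chosen:

```python
def apply_mutex(detections):
    """Mutual-exclusion post-processing on instrument detections.
    `detections` maps instrument name -> confidence; within each
    exclusive group only the highest-confidence instrument survives
    (the tie-breaking rule is an assumption -- the patent only says
    the instruments cannot co-occur).  Instrument names follow the
    translated terms used in this document."""
    exclusive_groups = [
        {"straight dissecting forceps", "dissecting forceps", "maryland forceps"},
        {"clip applier", "scissors"},
        {"atraumatic grasping forceps", "wave forceps"},
    ]
    kept = dict(detections)
    for group in exclusive_groups:
        present = [k for k in kept if k in group]
        if len(present) > 1:
            best = max(present, key=lambda k: kept[k])
            for k in present:
                if k != best:
                    del kept[k]     # suppress the lower-confidence rivals
    return set(kept)

dets = {"clip applier": 0.9, "scissors": 0.6, "electric hook": 0.8}
print(apply_mutex(dets))  # scissors suppressed by the higher-scoring clip applier
```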
Similarly, the anatomical structure detection model first detects the anatomical structures appearing in the frame with the YOLOv4 algorithm model, and then performs mutual-exclusion processing on the detection results: only one instance of the same anatomical structure may appear at a time.
The CVS scoring model puts the frames to be scored into an EfficientNet network and scores the degree of CVS completion.
As shown in FIG. 5, the EfficientNet network uses the same building blocks as YOLOv4, but with a different structure. In the EfficientNet network, the loss function is the MSE loss, and the activation functions are ReLU and Swish.
The MSE loss is computed as

$$\mathcal{L}_{MSE} = \frac{1}{m}\sum_{i=1}^{m}\left(\hat{y}_i - y_i\right)^{2}$$

where $m$ represents all input CVS scoring regions; $\hat{y}_i$ represents the CVS detection score; and $y_i$ represents the CVS label score.
The ReLU activation function is

$$\mathrm{ReLU}(x) = \max(0, x)$$

where $x$ represents the input tensor and $\max(\cdot)$ takes the larger of the two values element-wise. The Swish activation function is

$$\mathrm{Swish}(x) = x \cdot \sigma(\beta x) = \frac{x}{1 + e^{-\beta x}}$$

where $x$ represents the input data; $\beta$ represents a constant or trainable parameter; and $e$ denotes the natural constant.
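The two activation functions and the MSE loss are easy to state directly in NumPy (β = 1 in Swish is an illustrative choice):

```python
import numpy as np

def relu(x):
    """ReLU(x) = max(0, x), element-wise."""
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    """Swish(x) = x * sigmoid(beta * x); beta = 1 is illustrative."""
    return x / (1.0 + np.exp(-beta * x))

def mse_loss(pred, label):
    """Mean squared error over the m CVS scoring regions."""
    pred, label = np.asarray(pred, float), np.asarray(label, float)
    return np.mean((pred - label) ** 2)

print(relu(np.array([-1.0, 2.0])))        # [0. 2.]
print(mse_loss([1.0, 2.0], [1.0, 4.0]))   # 2.0
```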
As shown in fig. 6, the surgical stage recognition model uses the CNN-based ResNet for machine learning, and to improve the learning effect a long short-term memory (LSTM) network is also applied to stage learning. The ResNet network uses the same modules as the YOLOv4 and EfficientNet networks, with a different composition structure. The LSTM network processes data carrying timing information, as follows.
$$i_t = \sigma\!\left(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}\right)$$

$$f_t = \sigma\!\left(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}\right)$$

$$g_t = \tanh\!\left(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}\right)$$

$$o_t = \sigma\!\left(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}\right)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot g_t$$

$$h_t = o_t \odot \tanh\!\left(c_t\right)$$

The formulas above describe the computation of one cell in the LSTM network, where $h_t$ represents the hidden state at time $t$; $c_t$ represents the LSTM cell state at time $t$; $x_t$ represents the input data at time $t$; $h_{t-1}$ represents the hidden state at time $t-1$; $i_t$ represents the result of the input operation; $f_t$ represents the result of the forget operation; $g_t$ represents the result of the cell-update operation; $o_t$ represents the output of the temporal-information-fusion operation on the cell; $b_{ii}$, $b_{if}$, $b_{ig}$ and $b_{io}$ (and their hidden-state counterparts) are different biases; and $\sigma$ denotes the sigmoid function.
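One step of the cell above, written in NumPy with the four gate blocks stacked into single weight matrices; the stacking order (input, forget, cell, output) is a convention chosen here for compactness, not something the patent specifies:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W_x, W_h, b):
    """One LSTM cell step implementing the equations above.
    W_x: (4H, D) input weights, W_h: (4H, H) hidden weights, b: (4H,),
    stacked in (input, forget, cell, output) gate order."""
    H = h_prev.shape[0]
    z = W_x @ x_t + W_h @ h_prev + b
    i_t = sigmoid(z[0:H])           # input gate
    f_t = sigmoid(z[H:2 * H])       # forget gate
    g_t = np.tanh(z[2 * H:3 * H])   # candidate cell update
    o_t = sigmoid(z[3 * H:4 * H])   # output gate
    c_t = f_t * c_prev + i_t * g_t  # new cell state
    h_t = o_t * np.tanh(c_t)        # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
D, H = 8, 4
h, c = np.zeros(H), np.zeros(H)
for t in range(5):  # run a short sequence through the cell
    h, c = lstm_cell(rng.standard_normal(D), h, c,
                     rng.standard_normal((4 * H, D)) * 0.1,
                     rng.standard_normal((4 * H, H)) * 0.1,
                     np.zeros(4 * H))
print(h.shape)  # (4,)
```

Because $h_t = o_t \odot \tanh(c_t)$ with $o_t \in (0,1)$, every hidden-state component stays strictly inside $(-1, 1)$.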
In the surgical stage algorithm model, n consecutive frames are first put into the ResNet network for feature extraction to obtain feature vectors; the feature vectors are then put into the LSTM to perform stage recognition on the last frame of the sequence, after which all stages recognized so far undergo stage post-processing.
Wherein the stage post-processing comprises:
denoising through a sliding window, and denoising according to the duration;
in the identification process, between the first occurrences of stages of the same kind, supplementing any portion not recognized as belonging to a stage into the adjacent stage, and then performing squeeze denoising: if the length of an intermediate stage is smaller than a percentage threshold of the lengths of both adjacent stages at the same time, that intermediate stage is deleted;
Adjusting abnormal stage positions according to expert opinion; specifically: placing the gallbladder-removal and surgical-field-cleaning stages after the hepatobiliary-triangle separation and gallbladder-bed freeing stages; deleting a surgical-field-cleaning stage that occurs before the hepatobiliary-triangle separation; deleting an adhesion-release stage that occurs after the hepatobiliary-triangle separation;
the part identified as being without a phase is replaced with the phase at the previous moment.
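The first two post-processing steps can be sketched as a sliding-window majority vote followed by duration-based relabeling; the window size and minimum run length below are illustrative parameters, not values given by the patent:

```python
from collections import Counter

def sliding_window_denoise(stages, window=5):
    """Sliding-window majority vote over a per-frame stage sequence --
    a minimal sketch of the sliding-window denoising step described
    above (the window size is an illustrative choice)."""
    half = window // 2
    out = []
    for i in range(len(stages)):
        lo, hi = max(0, i - half), min(len(stages), i + half + 1)
        out.append(Counter(stages[lo:hi]).most_common(1)[0][0])
    return out

def duration_denoise(stages, min_len=3):
    """Duration-based denoising: relabel runs shorter than `min_len`
    frames with the preceding stage (`min_len` is an assumption)."""
    out = list(stages)
    i = 0
    while i < len(out):
        j = i
        while j < len(out) and out[j] == out[i]:
            j += 1
        if j - i < min_len and i > 0:
            out[i:j] = [out[i - 1]] * (j - i)   # absorb the short run
        i = j
    return out

seq = ["A"] * 6 + ["B"] + ["A"] * 3 + ["C"] * 6
print(sliding_window_denoise(seq))  # the single-frame "B" spike is voted away
```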
Another embodiment of the present invention relates to a system based on the intelligent assistance method for reducing biliary tract injuries in laparoscopic cholecystectomy, characterized in that it includes:
an image acquisition module configured to acquire a laparoscopic cholecystectomy video stream resulting in a sequence of video frames;
the device detection module is configured to utilize the CSPDarkNet network to extract features, obtain a feature map as the input of a subsequent detection network, and finally obtain the device types appearing in the corresponding video frames;
the organ detection module is configured to extract features by using a CSPDarkNet network, obtain a feature map as the input of a subsequent detection network, and finally obtain the organ position appearing in a corresponding video frame;
the anatomical element region detection module is configured to utilize the CSPDarkNet network to extract features, obtain a feature map as the input of a subsequent detection network, and finally obtain the position of the anatomical element region appearing in the corresponding video frame;
The CVS scoring module is configured to extract image features by using a network formed by CNN convolution, the obtained feature map is used as the input of a subsequent full-connection network, and finally the score of the CVS in the corresponding video frame is obtained;
and the stage recognition module is configured to perform feature extraction on N images by using the ResNet network, obtain feature maps as the input of a subsequent long short-term memory network, and finally obtain the surgical stage of the Nth image;
and the system output module outputs, through a display screen, information including the current surgical stage, the surgical instruments in use, the correct anatomical structures, the CVS score and the like.
Each module in the system of the present invention implements the corresponding step of the method of the present invention, and specifically encompasses the corresponding operation steps of the method.
The foregoing describes preferred embodiments of this invention. It should be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications and environments falling within the scope of the inventive concept, whether taught above or apparent from the skill of the relevant art, may be resorted to. Modifications and variations effected by those skilled in the art without departing from the spirit and scope of the invention fall within the protection of the appended claims.

Claims (12)

1. An intelligent auxiliary method for improving the safety of laparoscopic cholecystectomy is characterized in that: the intelligent assistance method comprises the following steps:
the method comprises the following steps of firstly, obtaining an artificial intelligence computer model for identifying surgical elements and a safe surgical model based on medical application through labeled data training, wherein the safe surgical model is established based on the surgical elements obtained by the artificial intelligence computer model;
inputting the images acquired in real time into an artificial intelligent computer model through a laparoscope for processing, and acquiring quantitative data of surgical elements;
inputting the operation elements into a safe operation model for analysis, and outputting guidance elements for reducing iatrogenic biliary tract injury, namely anatomical structure real-time markers and CVS real-time scores;
and fourthly, performing surgical operation according to the anatomical structure real-time mark and the CVS real-time scoring indication, and prompting in real time in the surgical process.
2. The intelligent assistance method of improving the safety of laparoscopic cholecystectomy according to claim 1, wherein: the surgical elements include: surgical instruments, anatomy, surgical stage, and CVS score; and establishing a safe operation model based on medical application based on the relation among different operation stages, operation instruments, anatomical structures and CVS scores and by combining the time length of each operation stage, the total time length and the prior operation experience.
3. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 1 or 2, characterized by: the construction step of the artificial intelligence computer model comprises the following steps:
collecting a laparoscopic surgery video and labeling the picture and the video to obtain a surgery labeling database;
and respectively constructing a surgical instrument detection model, an anatomical structure detection model, a CVS scoring model and a surgical stage identification model through the YOLO v4 algorithm, the EfficientNet algorithm and ResNet + LSTM according to the surgical instrument labeling data, the anatomical structure labeling data, the CVS scoring labeling data and the surgical stage labeling data in the surgical labeling database.
4. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 3, characterized by: the safe operation model construction step comprises the following steps:
identifying surgical elements according to the surgical instrument detection model, the anatomical structure detection model, the CVS scoring model and the surgical stage identification model obtained in the artificial intelligence computer model construction step;
and constructing a safe operation model covering the cholecystectomy anatomical structures, cholecystectomy instrument usage, cholecystectomy stage relations, cholecystectomy operation duration calculation and CVS score numerical relations, by combining expert experience, guidelines and treatises.
5. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 3, characterized by: the collecting the video of the laparoscopic surgery and labeling the picture and the video to obtain a surgery labeling database comprises:
a1, extracting frames from the operation video with FFmpeg software at a rate of 1 frame per second to obtain a surgical picture library, and manually labeling the extracted pictures with anatomical structures, CVS scores and surgical instruments using Labelme software;
a2, transcoding the operation video according to a uniform format through FFmpeg to obtain an operation video library, and labeling each operation stage in the video;
a3, establishing a surgical annotation database according to the annotation data obtained in the step A1 and the step A2.
6. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 5 or 2, characterized by: the anatomical structure comprises a cystic duct, a cystic artery, a common bile duct, a cystic bed and a hepatic and biliary trigone area; the CVS scoring related areas comprise a hepatic and biliary trigone area, a lower third area of a gall bladder bed, a gall bladder duct and a gall bladder artery area; the surgical instrument comprises a stabbing card, a non-traumatic grasping forceps, an electric hook, a clip applier, an absorbable clip, an iron clip, a Maryland forceps, a gauze, a straight separating forceps, a scissors, a big grasping forceps, a separating forceps, a specimen bag, a puncture needle, a suction apparatus, an electric coagulation, a drainage tube and a wave forceps; the operation stage comprises a pneumoperitoneum establishing stage, an adhesion releasing stage, a liver and gall triangle separating stage, a free gall bladder bed stage, an operation area cleaning stage and a gall bladder taking stage.
7. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 6, characterized by: the method for constructing the surgical instrument detection model, the anatomical structure detection model, the CVS scoring model and the surgical stage identification model respectively through the YOLO v4 algorithm and the ResNet + LSTM according to the surgical instrument annotation data, the anatomical structure annotation data, the CVS scoring annotation data and the surgical stage annotation data in the surgical annotation database comprises the following steps:
constructing a surgical instrument detection model and an anatomical structure detection model according to the surgical instrument labeling data, the anatomical structure labeling data and the CVS scoring labeling data through a YOLO v4 algorithm;
constructing a CVS scoring model through EfficientNet according to the CVS scoring annotation data;
and constructing a surgical stage identification model through ResNet + LSTM according to the surgical stage marking data.
8. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 3 or 4 or 7, characterized by: the detection and identification step of the surgical instrument detection model comprises the following steps: detecting various instruments appearing in the picture, and performing mutual exclusion processing on the detected instruments; the mutual exclusion treatment comprises only one of straight separating pliers, separating pliers and Maryland pliers which are possible to appear simultaneously; the clip applier and scissors cannot occur at the same time; no traumatic graspers and wave forceps can occur at the same time; the electric hook, the clip applier, the scissors, the large grasping forceps, the suction apparatus and the electric coagulation can only occur once at the same time;
the detection and identification step of the anatomical structure detection model comprises: detecting the anatomical structures appearing in the picture, and performing mutual-exclusion processing on the detection results, wherein only one instance of the same anatomical structure may appear at a time;
the grading step of the CVS grading model comprises the following steps: and putting the pictures to be scored into an EfficientNet network, and scoring the CVS completion degree.
9. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 3 or 4 or 7, characterized by: the detection and identification of the operation stage identification model comprises the following steps:
putting n frames of continuous pictures into a ResNet network for feature extraction to obtain feature vectors;
and putting a plurality of feature vectors into an LSTM network, performing stage identification on the last picture in the continuous pictures, and performing stage post-processing on all the stages so far.
10. An intelligent assistance method to improve the safety of laparoscopic cholecystectomy according to claim 9, characterized by: the stage post-processing comprises the following steps:
denoising through a sliding window, and denoising according to the duration;
in the identification process, between the first occurrences of stages of the same kind, supplementing the part which is not in any stage into the adjacent stage, and then performing squeeze denoising;
adjusting the abnormal position according to the expert opinion;
the part identified as being without a phase is replaced with the phase at the previous moment.
11. A system based on the intelligent auxiliary method for reducing biliary tract injuries in laparoscopic cholecystectomy, characterized in that the system comprises:
an image acquisition module configured to acquire a laparoscopic cholecystectomy video stream resulting in a sequence of video frames;
the device detection module is configured to utilize the CSPDarkNet network to extract features, obtain a feature map as the input of a subsequent detection network, and finally obtain the device types appearing in the corresponding video frames;
the organ detection module is configured to extract features by using a CSPDarkNet network, obtain a feature map as the input of a subsequent detection network, and finally obtain the organ position appearing in a corresponding video frame;
the anatomical element region detection module is configured to utilize a CSPDarkNet network to extract features, obtain a feature map as the input of a subsequent detection network, and finally obtain the position of an anatomical element region appearing in a corresponding video frame;
the CVS scoring module is configured to extract image features by using a network formed by CNN convolution, the obtained feature map is used as the input of a subsequent full-connection network, and finally the score of the CVS in the corresponding video frame is obtained;
the stage identification module is configured to perform feature extraction on N images by using a ResNet network, obtain feature maps as the input of a subsequent long short-term memory network, and finally obtain the surgical stage of the Nth image;
and the system output module outputs various information including the current surgical stage, the surgical instrument in use, the correct anatomical structure and the CVS scoring condition through a display screen.
12. The assistance system according to claim 11, characterized in that: the system assists the surgical procedure with an intelligent assistance method of improving the safety of laparoscopic cholecystectomy according to any one of claims 1 to 10.
CN202110228978.3A 2021-03-02 2021-03-02 Intelligent auxiliary system for improving safety of laparoscopic cholecystectomy Active CN112932663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110228978.3A CN112932663B (en) 2021-03-02 2021-03-02 Intelligent auxiliary system for improving safety of laparoscopic cholecystectomy


Publications (2)

Publication Number Publication Date
CN112932663A true CN112932663A (en) 2021-06-11
CN112932663B CN112932663B (en) 2021-10-22

Family

ID=76247118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110228978.3A Active CN112932663B (en) 2021-03-02 2021-03-02 Intelligent auxiliary system for improving safety of laparoscopic cholecystectomy

Country Status (1)

Country Link
CN (1) CN112932663B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113742527A (en) * 2021-11-08 2021-12-03 成都与睿创新科技有限公司 Method and system for retrieving and extracting operation video clips based on artificial intelligence
CN113813053A (en) * 2021-09-18 2021-12-21 长春理工大学 Operation process analysis method based on laparoscope endoscopic image
CN114005022A (en) * 2021-12-30 2022-02-01 四川大学华西医院 Dynamic prediction method and system for surgical instrument
CN114145844A (en) * 2022-02-10 2022-03-08 北京数智元宇人工智能科技有限公司 Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm
CN114299072A (en) * 2022-03-11 2022-04-08 四川大学华西医院 Artificial intelligence-based anatomy variation identification prompting method and system
CN114601560A (en) * 2022-05-11 2022-06-10 中国科学院深圳先进技术研究院 Minimally invasive surgery assisting method, device, equipment and storage medium
CN114678117A (en) * 2022-05-26 2022-06-28 成都与睿创新科技有限公司 Management method and device for standardizing operating behaviors of operating room personnel
CN114724682A (en) * 2022-06-08 2022-07-08 成都与睿创新科技有限公司 Auxiliary decision-making method and device for minimally invasive surgery
WO2023017651A1 (en) * 2021-08-13 2023-02-16 ソニーグループ株式会社 Medical observation system, information processing device, and information processing method
CN116269749A (en) * 2023-03-06 2023-06-23 东莞市东部中心医院 Laparoscopic bladder cancer surgical system with improved reserved nerves
CN116385497A (en) * 2023-05-29 2023-07-04 成都与睿创新科技有限公司 Custom target tracking method and system for body cavity
CN116863469A (en) * 2023-06-27 2023-10-10 首都医科大学附属北京潞河医院 Deep learning-based surgical anatomy part identification labeling method
WO2023193238A1 (en) * 2022-04-08 2023-10-12 中国科学院深圳先进技术研究院 Surgical instrument, behavior and target tissue joint identification method and apparatus
CN116919593A (en) * 2023-08-04 2023-10-24 溧阳市中医医院 Gallbladder extractor for cholecystectomy
CN117224231A (en) * 2023-11-16 2023-12-15 四川大学华西医院 Vascular exposure analysis device for hepatectomy dissection
CN116863469B (en) * 2023-06-27 2024-05-14 首都医科大学附属北京潞河医院 Deep learning-based surgical anatomy part identification labeling method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103815972A (en) * 2014-02-26 2014-05-28 上海齐正微电子有限公司 Automatic tracking system for operative region of laparothoracoscope and method
WO2016185259A1 (en) * 2015-05-20 2016-11-24 Siemens Healthcare Gmbh Method and apparatus to provide updated patient images during robotic surgery
CN107789058A (en) * 2017-10-23 2018-03-13 南方医科大学南方医院 For stomach cancer endoscope-assistant surgery real-time navigation system based on the automatic identification method of condition random field arteriovenous title
CN108492693A (en) * 2018-03-28 2018-09-04 盛玉涛 A kind of laparoscopic surgery simulated training system shown based on computer aided medicine
CN109717833A (en) * 2018-11-26 2019-05-07 中国科学院软件研究所 A kind of neurological disease assistant diagnosis system based on human motion posture
US20200273560A1 (en) * 2019-02-21 2020-08-27 Theator inc. Surgical image analysis to determine insurance reimbursement
CN111931626A (en) * 2020-08-03 2020-11-13 天津理工大学 Automatic operation method of vascular intervention robot based on deep learning
CN112220562A (en) * 2019-07-15 2021-01-15 数字外科有限公司 Method and system for enhancing surgical tool control during surgery using computer vision



Also Published As

Publication number Publication date
CN112932663B (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN112932663B (en) Intelligent auxiliary system for improving safety of laparoscopic cholecystectomy
WO2020207377A1 (en) Method, device, and system for image recognition model training and image recognition
Sun et al. Colorectal polyp segmentation by U-Net with dilation convolution
WO2021093448A1 (en) Image processing method and apparatus, server, medical image processing device and storage medium
CN114299072B (en) Artificial intelligence-based anatomy variation identification prompting method and system
CN111932561A (en) Real-time enteroscopy image segmentation method and device based on integrated knowledge distillation
CN108830835A (en) It identifies the method for spinal sagittal bit image exception and calculates equipment
CN114724682B (en) Auxiliary decision-making device for minimally invasive surgery
KR20190105180A (en) Apparatus for Lesion Diagnosis Based on Convolutional Neural Network and Method thereof
CN112613517B (en) Endoscopic instrument segmentation method, endoscopic instrument segmentation apparatus, computer device, and storage medium
CN111652175A (en) Real-time surgical tool detection method applied to robot-assisted surgical video analysis
Maiora et al. Abdominal CTA image analisys through active learning and decision random forests: Aplication to AAA segmentation
CN114005022B (en) Dynamic prediction method and system for surgical instrument
Chen et al. AI-PLAX: AI-based placental assessment and examination using photos
Mondal et al. Multitask learning of temporal connectionism in convolutional networks using a joint distribution loss function to simultaneously identify tools and phase in surgical videos
Chu et al. Multi-level feature aggregation network for instrument identification of endoscopic images
Zhang et al. TUnet-LBF: Retinal fundus image fine segmentation model based on transformer Unet network and LBF
Jangade et al. Study on Deep Learning Models for Human Pose Estimation and its Real Time Application
Öztürk et al. A novel polyp segmentation approach using U-net with saliency-like feature fusion
CN116703837A (en) MRI image-based rotator cuff injury intelligent identification method and device
Jaafari et al. The impact of ensemble learning on surgical tools classification during laparoscopic cholecystectomy
Liu et al. Towards surgical tools detection and operative skill assessment based on deep learning
CN113344911B (en) Method and device for measuring size of calculus
CN113469942B (en) CT image lesion detection method
Mei et al. Detection of suture needle using deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant