CN114565881B - Method and system for distinguishing different scenes inside and outside body cavity - Google Patents


Info

Publication number
CN114565881B
CN114565881B (Application CN202210455912.2A)
Authority
CN
China
Prior art keywords
body cavity
outside
classification
video
identifying
Prior art date
Legal status
Active
Application number
CN202210455912.2A
Other languages
Chinese (zh)
Other versions
CN114565881A (en)
Inventor
刘杰
刘润文
王玉贤
Current Assignee
Chengdu Yurui Innovation Technology Co ltd
Original Assignee
Chengdu Yurui Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Yurui Innovation Technology Co ltd filed Critical Chengdu Yurui Innovation Technology Co ltd
Priority to CN202210455912.2A priority Critical patent/CN114565881B/en
Publication of CN114565881A publication Critical patent/CN114565881A/en
Application granted granted Critical
Publication of CN114565881B publication Critical patent/CN114565881B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes

Abstract

The invention relates to a method and a system for distinguishing different scenes inside and outside a body cavity, in the technical field of artificial intelligence, comprising the following steps: S1, constructing a database for training a classification model that identifies intra- and extra-cavity fields of view, and constructing that classification model; S2, accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes, and outputting the classification results; S3, counting the intra- and extra-cavity states of the surgical lens, the number of times the lens is inserted into or removed from the body cavity during surgery, the time points and durations of classification changes, and their proportion of the total surgery duration. Distinguishing the extra-cavity state and its duration from mere "invalid duration" helps surgical quality control separate out non-technical factors that affect surgical efficiency, and the accuracy of distinguishing the different scenes of a surgery video is improved to a clinically usable level.

Description

Method and system for distinguishing different scenes inside and outside body cavity
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and a system for distinguishing different scenes inside and outside a body cavity.
Background
Research on artificial intelligence in endoscopic minimally invasive surgery is growing rapidly, with existing work ranging from instrument identification to the evaluation of surgical phases, anatomical regions, surgical actions and anatomical treatment, and preliminary prediction of surgery duration. Few studies, however, address the detection of intra- and extra-cavity scenes. Traditionally, the different extra-cavity scenes in endoscopic minimally invasive surgery, like video clips containing no anatomy, organs or surgical actions, are lumped into "invalid duration". Yet unlike a mere no-action segment, an extra-cavity scene is not only visually distinct from intra-cavity scenes but also clinically different from an intra-cavity no-action segment: overly long and overly frequent extra-cavity scenes indicate, respectively, that the surgical team's arrangement is not well organized and that the anti-fog treatment of the surgical lens is inadequate, rather than that the surgeon's technique is lacking.
For the different intra- and extra-cavity scenes in endoscopic minimally invasive surgery, the state and frequency of switching between them during an operation is one of the criteria for evaluating surgical fluency, skill proficiency, and the distribution and effectiveness of surgical segments, and it correlates to some extent with the inherent difficulty of the operation; detecting the different intra- and extra-cavity states therefore underpins the automatic implementation of these functions. Meanwhile, studying intra- and extra-cavity scenes and invalid phases with traditional computer-vision algorithms not only consumes large data sets but also yields limited accuracy gains that cannot meet the demands of clinical work.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method and a system for distinguishing different scenes inside and outside a body cavity, filling a gap in the application of artificial intelligence to endoscopic minimally invasive surgery and improving the accuracy of distinguishing intra- and extra-cavity scenes.
The purpose of the invention is realized by the following technical scheme: a method for distinguishing different scenes inside and outside a body cavity, the method comprising:
S1, constructing a database for training a classification model that identifies intra- and extra-cavity fields of view, and constructing the classification model;
S2, accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes, and outputting the classification results;
S3, counting the intra- and extra-cavity states of the surgical lens, the number of times the lens is inserted into or removed from the body cavity during surgery, the time points and durations of classification changes, and their proportion of the total surgery duration.
Constructing the database for training the classification model that identifies intra- and extra-cavity fields of view comprises:
B1, transcoding all surgery videos into a unified format with video transcoding software;
B2, annotating the intra- and extra-cavity time periods in each video with annotation software, reviewing the preliminarily annotated video data against clinical experience, and correcting unqualified preliminary annotations to obtain qualified annotated videos.
Constructing the classification model that identifies intra- and extra-cavity fields of view comprises the following steps:
A1, accessing the video picture through an access path, resizing the picture, and computing the RGB three-channel color histograms of the picture;
A2, taking each pixel value in a histogram as an index into a one-dimensional vector and the count of that pixel value as the value at that index, obtaining the color histogram vector of each channel;
A3, converting the RGB three-channel histograms into a single one-dimensional vector by concatenating the channels in RGB order into a vector of length 255 × 3;
A4, feeding the one-dimensional vector into a three-layer fully-connected classification model, each layer comprising a linear computation, a ReLU activation function and a batch normalization layer, and obtaining the intra-/extra-cavity classification result for the video picture from the model output;
A5, computing the loss between the classification result and the ground truth with a mean-squared-error formula and optimizing the classification model accordingly;
A6, repeating steps A1-A5 until the loss no longer decreases, yielding the final classification model for identifying intra- and extra-cavity fields of view.
Accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes comprises:
capturing the current surgical scene in real time and feeding the captured surgery video picture information into the classification model;
repeating steps A1-A4 to analyze the video pictures; in the model output, extra-cavity data that appears for only 1 second is relabeled as intra-cavity data, and noise is removed with a sliding window of 5 seconds.
A system for distinguishing different scenes inside and outside a body cavity comprises a construction module, a surgery video picture acquisition module, an intra-/extra-cavity scene analysis module, a surgery video editing and splicing module, a video segment output module and a statistical analysis module;
the construction module is used for constructing a database for training the classification model that identifies intra- and extra-cavity fields of view and for constructing the classification model;
the surgery video picture acquisition module is used for acquiring continuous pictures under the lens field of view in real time during surgery and transmitting them to the intra-/extra-cavity scene analysis module;
the intra-/extra-cavity scene analysis module is used for accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes;
the surgery video editing and splicing module is used for, in surgeries without extra-cavity operations, deleting the extra-cavity picture sequences and splicing the remaining video segments in time order according to the analysis results of the scene analysis module; for surgeries with extra-cavity operations, it edits and splices the switches of the surgical field of view according to those results to obtain a more continuous and fluent surgical process;
the video segment output module is used for outputting a video file composed of the spliced picture sequences;
the statistical analysis module is used for counting the intra- and extra-cavity states of the surgical lens, the number of times the lens is inserted into or removed from the body cavity during surgery, the time points and durations of classification changes, and their proportion of the total surgery duration.
The intra-/extra-cavity scene analysis module accesses and analyzes the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes, and specifically:
captures the current surgical scene in real time, feeds the captured surgery video picture information into the classification model, resizes the picture and computes the RGB three-channel color histograms of the picture;
takes each pixel value in a histogram as an index into a one-dimensional vector and the count of that pixel value as the value at that index, obtaining the color histogram vector of each channel;
converts the RGB three-channel histograms into a single one-dimensional vector by concatenating the channels in RGB order into a vector of length 255 × 3;
feeds the one-dimensional vector into the three-layer fully-connected classification model, each layer comprising a linear computation, a ReLU activation function and a batch normalization layer, and obtains the intra-/extra-cavity classification result for the picture from the model output;
relabels, in the model output, extra-cavity data that appears for only 1 second as intra-cavity data and removes noise with a sliding window of 5 seconds.
The invention has the following advantages: by automatically distinguishing intra- and extra-cavity scenes and counting their distribution and duration proportions, the method and system fill a gap in current artificial-intelligence applications to endoscopic surgery. Distinguishing the extra-cavity state and its duration from mere "invalid duration" helps surgical quality control separate out non-technical factors that affect surgical efficiency, such as equipment and team arrangement. The accuracy of distinguishing the different scenes of a surgery video is improved to a clinically usable level, offering technical insight for further improving the accuracy of surgery-video content analysis. The technique also provides more comprehensive, quantified data on surgical fluency and on the skill proficiency of the chief surgeon and assistants, indirectly reflects the difficulty of the operation, and strongly supports higher-level functions such as predicting surgery duration.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments of the present application provided below in connection with the appended drawings is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in FIG. 1, one embodiment of the present invention relates to a method and system for distinguishing different scenes inside and outside a body cavity, comprising:
s1, constructing a database of classification models for training and identifying the internal and external visual fields of the body cavity and constructing a classification model for identifying the internal and external visual fields of the body cavity;
s2, accessing and analyzing different operation scenes in the operation video picture through the in-vivo and in-vitro visual field identification model to obtain classification data of the operation scenes of different types, and outputting a classification data result;
further, reading all image data of a certain operation at a frame rate of 1fps, analyzing all images by using a classification model for identifying the internal and external visual fields of a body cavity, arranging all data of the same operation according to a time sequence, and processing and smoothly denoising the arranged result to obtain a final result;
s3, counting the state of the operation lens inside and outside the body cavity, the times of lens placing or moving out of the body cavity in the operation process, the time point and time length of classification change and the proportion of the total operation time length occupied.
Further, constructing the database for training the classification model that identifies intra- and extra-cavity fields of view includes:
B1, transcoding all surgery videos into a unified format, e.g. MPEG-4, with video transcoding software;
B2, annotating the intra- and extra-cavity time periods in each video with annotation software, reviewing the preliminarily annotated video data against clinical experience, consensus and the like, and correcting unqualified preliminary annotations to obtain qualified annotated pictures.
Further, constructing the classification model that identifies intra- and extra-cavity fields of view includes:
A1, accessing the video pictures through an access path and resizing each picture while keeping its aspect ratio, so that the longest edge does not exceed 1000 pixels, then computing the RGB three-channel color histograms of the picture via $Y = [x_0, x_1, x_2, \ldots, x_{255}]$, where $Y$ is a single-channel color histogram vector and $x_i$ is the number of times color value $i$ appears on that channel.
A2, taking each pixel value in a histogram as an index into a one-dimensional vector and the count of that pixel value as the value at that index, obtaining the color histogram vector of each channel;
A3, converting the RGB three-channel histograms into a single one-dimensional vector by concatenating the channels in RGB order into a vector of length 255 × 3;
a4, inputting the one-dimensional vector into a three-layer fully-connected classification model, wherein each layer of the classification model comprises a linear calculation layer, a ReLU activation function and a batch normalization layer, and obtaining the internal and external classification results of the picture body cavity according to the calculation results of the classification model
Figure 611767DEST_PATH_IMAGE002
The above formula of convolution calculation, wherein y represents the convolution calculation output, n represents the number of neurons, wiWeight, x, of the ith neuroniRepresenting the input data of the ith neuron, and b adding an offset to the calculation result.
The ReLU activation function is $\mathrm{ReLU}(x) = \max(0, x)$, where $x$ is the input tensor and $\max(\cdot)$ returns the larger of its arguments.
The batch normalization layer computes $\mathrm{BN}(x) = \gamma \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} + \beta$, where $\mathrm{BN}(x)$ is the batch-normalized output, $x$ the input data, $\mathrm{E}[x]$ the mean of the $x$ tensor, $\mathrm{Var}[x]$ its variance, $\epsilon$ a very small constant that keeps the denominator from being 0, and $\gamma$ and $\beta$ learnable coefficients.
A5, computing the loss between the classification result and the ground truth with the mean-squared-error formula and optimizing the classification model accordingly.
The mean-squared-error loss is $\mathrm{loss} = \frac{1}{m} \sum_{i=1}^{m} (\hat{y}_i - y_i)^2$, where $m$ is the number of input pictures, $\hat{y}_i$ the classification result for the $i$-th picture, and $y_i$ the label of the $i$-th picture.
A6, repeating steps A1-A5 until the loss no longer decreases, yielding the final classification model for identifying intra- and extra-cavity fields of view. A code sketch of steps A1-A6 follows.
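A minimal sketch of steps A1-A6, assuming OpenCV and PyTorch; the hidden width, the Adam optimizer and the 256-bin histograms are illustrative assumptions (the text above gives a vector length of 255 × 3, while 8-bit channels have 256 possible values, so 256 bins per channel are used here):

    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    def histogram_feature(image_bgr):
        """Steps A1-A3: resize so the longest edge is at most 1000 px,
        compute per-channel color histograms, concatenate into one vector."""
        h, w = image_bgr.shape[:2]
        scale = 1000 / max(h, w)
        if scale < 1.0:  # keep aspect ratio, longest edge <= 1000 px
            image_bgr = cv2.resize(image_bgr, (int(w * scale), int(h * scale)))
        chans = []
        for c in range(3):
            hist = cv2.calcHist([image_bgr], [c], None, [256], [0, 256])
            chans.append(hist.flatten())
        return torch.from_numpy(np.concatenate(chans)).float()

    class CavityClassifier(nn.Module):
        """Step A4: three fully-connected layers, each with a linear
        computation, a ReLU activation and batch normalization."""
        def __init__(self, in_dim=3 * 256, hidden=128):
            super().__init__()
            def block(i, o):
                return nn.Sequential(nn.Linear(i, o), nn.ReLU(), nn.BatchNorm1d(o))
            self.net = nn.Sequential(
                block(in_dim, hidden),
                block(hidden, hidden),
                block(hidden, 1),  # single score: inside vs. outside the cavity
            )

        def forward(self, x):
            return self.net(x)

    # Step A5: mean squared error, loss = (1/m) * sum_i (y_hat_i - y_i)^2.
    model = CavityClassifier()
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_step(batch_vectors, labels):
        """One optimization step; step A6 repeats this until the loss
        stops decreasing."""
        optimizer.zero_grad()
        loss = criterion(model(batch_vectors).squeeze(1), labels)
        loss.backward()
        optimizer.step()
        return loss.item()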
Further, accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes includes the following steps:
capturing the current surgical scene in real time and feeding the captured surgery video picture information into the classification model over USB (universal serial bus), a wireless link, or similar;
repeating steps A1-A4 to analyze the video pictures; in the model output, extra-cavity data that appears for only 1 second is relabeled as intra-cavity data, and noise is removed with a sliding window of 5 seconds, as in the sketch below.
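A minimal sketch of this post-processing, assuming one label per second (1 = extra-cavity, 0 = intra-cavity); the majority vote inside the 5-second window is an assumption, since only the window size is specified above:

    def smooth_labels(labels):
        """Relabel isolated 1-second extra-cavity detections, then denoise
        with a 5-second sliding window (majority vote assumed)."""
        out = list(labels)
        # Extra-cavity detections lasting only one second become intra-cavity.
        for i, v in enumerate(out):
            prev_in = i == 0 or out[i - 1] == 0
            next_in = i == len(out) - 1 or out[i + 1] == 0
            if v == 1 and prev_in and next_in:
                out[i] = 0
        # 5-second window: each second takes the window's majority label.
        half = 2  # window size 5 -> two neighbours on each side
        smoothed = []
        for i in range(len(out)):
            window = out[max(0, i - half): i + half + 1]
            smoothed.append(1 if sum(window) * 2 > len(window) else 0)
        return smoothed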
Further, the accessing and analyzing of the surgery video includes: accessing and analyzing a specific minimally invasive surgery video or video set from storage devices such as built-in memory, a memory card, a hard disk, a database or a network disk.
Further, the intra- and extra-cavity states of the surgical lens, together with the number and duration of lens insertions into or removals from the body cavity as a proportion of the total surgery duration, are output by the classification model to a display screen, a storage database and statistical software respectively, which includes the following steps:
storing a JSON record locally or at an established path, including but not limited to the time and date of the surgery, the hospital and department where it took place, the name of the surgery, the chief surgeon, the patient's name and basic information, the surgery duration, the start and stop times of the lens outside the body cavity, and the number of times the lens exited the body cavity (a sketch of such a record follows this paragraph);
screening the stored contents for a series of videos sharing the same characteristics under a specified database path — e.g. the same hospital and department, surgery name, or chief surgeon — and analyzing, with statistical software, the screened video group's average extra-cavity duration, average number of lens removals and similar figures, together with their statistical charts.
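A minimal sketch of such a stored record, assuming Python's standard json module; the key names and the "..." values are illustrative placeholders, while the fields themselves follow the list above:

    import json

    record = {
        "surgery_datetime": "...",        # time and date of the surgery
        "hospital": "...",
        "department": "...",
        "surgery_name": "...",
        "chief_surgeon": "...",
        "patient_name": "...",
        "patient_info": {},               # basic patient information
        "surgery_duration_s": 0,
        "extracavity_intervals": [],      # [start, stop] pairs, lens outside cavity
        "lens_exit_count": 0,
    }
    with open("surgery_record.json", "w", encoding="utf-8") as f:
        json.dump(record, f, ensure_ascii=False, indent=2)

Records in this form can then be screened by hospital, surgery name or surgeon and aggregated by the statistical software.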
The invention relates to a system for distinguishing different scenes inside and outside a body cavity, which comprises a construction module, a surgery video picture acquisition module, an intra-/extra-cavity scene analysis module, a surgery video editing and splicing module, a video segment output module and a statistical analysis module;
the construction module is used for constructing a database for training the classification model that identifies intra- and extra-cavity fields of view and for constructing the classification model;
the surgery video picture acquisition module is used for acquiring continuous pictures under the lens field of view in real time during surgery and transmitting them to the intra-/extra-cavity scene analysis module;
the intra-/extra-cavity scene analysis module is used for accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes;
the surgery video editing and splicing module is used for, in surgeries without extra-cavity operations, deleting the extra-cavity picture sequences and splicing the remaining video segments in time order according to the analysis results of the scene analysis module; for surgeries with extra-cavity operations, such as knee-joint cruciate-ligament repair or hand-assisted laparoscopic splenectomy, it edits and splices the switches of the surgical field of view according to those results to obtain a more continuous and fluent surgical flow (a splicing sketch follows this description);
the video segment output module is used for outputting a video file composed of the spliced picture sequences;
the statistical analysis module is used for counting the intra- and extra-cavity states of the surgical lens, the number of times the lens is inserted into or removed from the body cavity during surgery, the time points and durations of classification changes, and their proportion of the total surgery duration.
The system also comprises a data storage module for storing the analysis results of the intra-/extra-cavity scene analysis module.
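A minimal sketch of the editing and splicing step, assuming the moviepy library; the file names and interval values are illustrative placeholders:

    # Sketch: drop extra-cavity intervals and splice the remaining
    # intra-cavity segments in time order.
    from moviepy.editor import VideoFileClip, concatenate_videoclips

    def splice_intracavity(path, intracavity_intervals, out_path):
        source = VideoFileClip(path)
        clips = [source.subclip(start, end)
                 for start, end in intracavity_intervals]
        concatenate_videoclips(clips).write_videofile(out_path)

    # e.g. keep everything except an extra-cavity pause at 120-135 s:
    # splice_intracavity("surgery.mp4", [(0, 120), (135, 5400)], "spliced.mp4")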
Further, the intra-/extra-cavity scene analysis module accesses and analyzes the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes, and specifically:
captures the current surgical scene in real time, feeds the captured surgery video picture information into the classification model, resizes the picture and computes the RGB three-channel color histograms of the picture;
takes each pixel value in a histogram as an index into a one-dimensional vector and the count of that pixel value as the value at that index, obtaining the color histogram vector of each channel;
converts the RGB three-channel histograms into a single one-dimensional vector by concatenating the channels in RGB order into a vector of length 255 × 3;
feeds the one-dimensional vector into the three-layer fully-connected classification model, each layer comprising a linear computation, a ReLU activation function and a batch normalization layer, and obtains the intra-/extra-cavity classification result for the picture from the model output;
relabels, in the model output, extra-cavity data that appears for only 1 second as intra-cavity data and removes noise with a sliding window of 5 seconds.
The foregoing describes the preferred embodiments of the invention. It is to be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications and environments within the scope of the inventive concept, whether described above or apparent to those skilled in the relevant art, may be resorted to. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A method for distinguishing different scenes inside and outside a body cavity, characterized in that the method comprises the following steps:
S1, constructing a database for training a classification model that identifies intra- and extra-cavity fields of view, and constructing the classification model;
S2, accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different surgical scenes, and outputting the classification results;
S3, counting the intra- and extra-cavity states of the surgical lens, the number of times the lens is inserted into or removed from the body cavity during surgery, the time points and durations of classification changes, and their proportion of the total surgery duration;
wherein constructing the classification model that identifies intra- and extra-cavity fields of view comprises the following steps:
A1, accessing the video picture through an access path, resizing the picture, and computing the RGB three-channel color histograms of the picture;
A2, taking each pixel value in a histogram as an index into a one-dimensional vector and the count of that pixel value as the value at that index, obtaining the color histogram vector of each channel;
A3, converting the RGB three-channel histograms into a single one-dimensional vector by concatenating the channels in RGB order into a vector of length 255 × 3;
A4, feeding the one-dimensional vector into a three-layer fully-connected classification model, each layer comprising a linear computation, a ReLU activation function and a batch normalization layer, and obtaining the intra-/extra-cavity classification result for the video picture from the model output;
A5, computing the loss between the classification result and the ground truth with a mean-squared-error formula and optimizing the classification model accordingly;
A6, repeating steps A1-A5 until the loss no longer decreases, yielding the final classification model for identifying intra- and extra-cavity fields of view.
2. The method according to claim 1, characterized in that constructing the database for training the classification model that identifies intra- and extra-cavity fields of view comprises:
B1, transcoding all surgery videos into a unified format with video transcoding software;
B2, annotating the intra- and extra-cavity time periods in each video with annotation software, reviewing the preliminarily annotated video data against clinical experience, and correcting unqualified preliminary annotations to obtain qualified annotated pictures.
3. The method according to claim 1, characterized in that accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes comprises the following steps:
capturing the current surgical scene in real time and feeding the captured surgery video picture information into the classification model;
repeating steps A1-A4 to analyze the video pictures; in the model output, extra-cavity data that appears for only 1 second is relabeled as intra-cavity data, and noise is removed with a sliding window of 5 seconds.
4. A system for distinguishing different scenes inside and outside a body cavity, characterized by comprising a construction module, a surgery video picture acquisition module, an intra-/extra-cavity scene analysis module, a surgery video editing and splicing module, a video segment output module and a statistical analysis module;
the construction module is used for constructing a database for training a classification model that identifies intra- and extra-cavity fields of view and for constructing the classification model;
the surgery video picture acquisition module is used for acquiring continuous pictures under the lens field of view in real time during surgery and transmitting them to the intra-/extra-cavity scene analysis module;
the intra-/extra-cavity scene analysis module is used for accessing and analyzing the different surgical scenes in the surgery video through the classification model to obtain classification data for the different types of surgical scenes;
the surgery video editing and splicing module is used for, in surgeries without extra-cavity operations, deleting the extra-cavity picture sequences and splicing the remaining video segments in time order according to the analysis results of the scene analysis module, and, for surgeries with extra-cavity operations, editing and splicing the switches of the surgical field of view according to those results to obtain a more continuous and fluent surgical process;
the video segment output module is used for outputting a video file composed of the spliced picture sequences;
the statistical analysis module is used for counting the intra- and extra-cavity states of the surgical lens, the number of times the lens is inserted into or removed from the body cavity during surgery, the time points and durations of classification changes, and their proportion of the total surgery duration;
wherein the intra-/extra-cavity scene analysis module specifically:
captures the current surgical scene in real time, feeds the captured surgery video picture information into the classification model, resizes the picture and computes the RGB three-channel color histograms of the picture;
takes each pixel value in a histogram as an index into a one-dimensional vector and the count of that pixel value as the value at that index, obtaining the color histogram vector of each channel;
converts the RGB three-channel histograms into a single one-dimensional vector by concatenating the channels in RGB order into a vector of length 255 × 3;
feeds the one-dimensional vector into the three-layer fully-connected classification model, each layer comprising a linear computation, a ReLU activation function and a batch normalization layer, and obtains the intra-/extra-cavity classification result for the picture from the model output;
relabels, in the model output, extra-cavity data that appears for only 1 second as intra-cavity data and removes noise with a sliding window of 5 seconds.
CN202210455912.2A 2022-04-28 2022-04-28 Method and system for distinguishing different scenes inside and outside body cavity Active CN114565881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210455912.2A CN114565881B (en) 2022-04-28 2022-04-28 Method and system for distinguishing different scenes inside and outside body cavity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210455912.2A CN114565881B (en) 2022-04-28 2022-04-28 Method and system for distinguishing different scenes inside and outside body cavity

Publications (2)

Publication Number Publication Date
CN114565881A (en) 2022-05-31
CN114565881B (en) 2022-07-12

Family

ID=81721078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210455912.2A Active CN114565881B (en) 2022-04-28 2022-04-28 Method and system for distinguishing different scenes inside and outside body cavity

Country Status (1)

Country Link
CN (1) CN114565881B (en)


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007236598A (en) * 2006-03-08 2007-09-20 Pentax Corp Processor and electronic endoscope system
FR2920086A1 (en) * 2007-08-24 2009-02-27 Univ Grenoble 1 ANALYSIS SYSTEM AND METHOD FOR ENDOSCOPY SURGICAL OPERATION
JP6192602B2 (en) * 2014-06-25 2017-09-06 オリンパス株式会社 Image recording device
US20170132785A1 (en) * 2015-11-09 2017-05-11 Xerox Corporation Method and system for evaluating the quality of a surgical procedure from in-vivo video
US10806532B2 (en) * 2017-05-24 2020-10-20 KindHeart, Inc. Surgical simulation system using force sensing and optical tracking and robotic surgery system
US11205508B2 (en) * 2018-05-23 2021-12-21 Verb Surgical Inc. Machine-learning-oriented surgical video analysis system
US11568542B2 (en) * 2019-04-25 2023-01-31 Surgical Safety Technologies Inc. Body-mounted or object-mounted camera system
CN113288452B (en) * 2021-04-23 2022-10-04 北京大学 Operation quality detection method and device
CN113662664B (en) * 2021-09-29 2022-08-16 哈尔滨工业大学 Instrument tracking-based objective and automatic evaluation method for surgical operation quality
CN114170437A (en) * 2021-11-02 2022-03-11 翁莹 Surgical skill rating method and system based on interpretable artificial intelligence

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108882964A (en) * 2015-10-09 2018-11-23 柯惠Lp公司 Make body cavity visualization method with robotic surgical system using angled endoscope
US10383694B1 (en) * 2018-09-12 2019-08-20 Johnson & Johnson Innovation—Jjdc, Inc. Machine-learning-based visual-haptic feedback system for robotic surgical platforms
CN109949880A (en) * 2019-01-31 2019-06-28 北京汉博信息技术有限公司 A kind of surgical data processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Julian J. H. Leong, "HMM assessment of quality of movement trajectory in laparoscopic surgery," MICCAI 2006, Oct. 31, 2006, pp. 752-759. *

Also Published As

Publication number Publication date
CN114565881A (en) 2022-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant