CN116385977B - Intraoperative bleeding point detection system based on deep learning - Google Patents

Intraoperative bleeding point detection system based on deep learning

Info

Publication number
CN116385977B
Authority
CN
China
Prior art keywords
bleeding
image
layer
module
bleeding point
Prior art date
Legal status
Active
Application number
CN202310660999.1A
Other languages
Chinese (zh)
Other versions
CN116385977A (en)
Inventor
张岚林
许尚栋
代雨洁
Current Assignee
Beijing Anzhen Hospital
Original Assignee
Beijing Anzhen Hospital
Priority date
Filing date
Publication date
Application filed by Beijing Anzhen Hospital
Priority to CN202310660999.1A
Publication of CN116385977A
Application granted
Publication of CN116385977B


Classifications

    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06N 3/0442: Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/045: Combinations of networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide detection or recognition
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
    • G06V 10/40: Extraction of image or video features
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes
    • G06V 2201/03: Recognition of patterns in medical or anatomical images
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an intraoperative bleeding point detection system based on deep learning, comprising: an image data acquisition module, which captures video of a suspected bleeding area during an operation and splits the video into frames to obtain an infrared image sequence; a marking module, which labels the target area in the infrared image sequence at the pixel level to obtain a label image corresponding to each original image; an image segmentation and extraction module, which inputs the label images into a DSCNN-BiLSTM network model, extracts image features of the bleeding area, and acquires an image of the target area; and a positioning module, which locates the bleeding point and judges the bleeding volume from the acquired target-area image. The invention can identify bleeding points accurately and effectively and substantially reduces the rate of subjective misjudgment.

Description

Intraoperative bleeding point detection system based on deep learning
Technical Field
The invention relates to the technical field of image data processing, in particular to an intraoperative bleeding point detection system based on deep learning.
Background
Since heart surgery is mainly open-chest surgery, hemostasis after the main procedure is complete is quite complex. Preoperative coagulation dysfunction, a large intraoperative suture area with numerous suture lines, extracorporeal circulation (for example, false-lumen thrombus in acute aortic dissection patients), and supporting techniques such as hypothermic surgery inevitably leave patients with impaired coagulation and severe bleeding and oozing after the procedure, so postoperative hemostasis is a critical link in whether the operation succeeds.
At present, bleeding points and the need for hemostasis are still judged by gross visual inspection. It is difficult for the operator to judge bleeding points reliably, and misjudgment or missed judgment is possible, with the following adverse effects: 1. operating where hemostasis is not required can cause new bleeding, poor healing, and similar problems; 2. missing a site that requires hemostasis leads to high postoperative drainage that may be uncontrolled, and may even require a second operation to find the bleeding point again; 3. different operators judge bleeding differently, and even with auxiliary devices (infrared thermal imaging equipment) the judgment is still made by the naked eye, which increases the difficulty of hemostasis.
Disclosure of Invention
In view of the above problems, the invention aims to provide an intraoperative bleeding point detection system based on deep learning that can identify bleeding points accurately and effectively and substantially reduce the rate of subjective misjudgment.
In order to achieve the above purpose, the present invention adopts the following technical scheme: an intraoperative bleeding point detection system based on deep learning, comprising: an image data acquisition module for capturing video of a suspected bleeding area during an operation and splitting the video into frames to obtain an infrared image sequence; a marking module for labeling the target area in the infrared image sequence at the pixel level to obtain a label image corresponding to each original image; an image segmentation and extraction module that inputs the label images into a DSCNN-BiLSTM network model, extracts image features of the bleeding area, and acquires an image of the target area; and a positioning module for locating the bleeding point and judging the bleeding volume from the acquired target-area image.
Further, the system also comprises a preprocessing module; the preprocessing module is used for cooling the suspected bleeding area, flushing it, and reducing bleeding.
Further, in the image data acquisition module, a thermal infrared imager captures the bleeding-area video, and an OpenCV video-framing method converts the video into an infrared image sequence.
Further, in the marking module, 3D Slicer is used to label the target area in the infrared image sequence at the pixel level.
Further, a region that reaches a preset temperature in the infrared image is taken as the target region.
Further, the DSCNN-BiLSTM network model comprises: a coarse-grained network module, a bidirectional long short-term memory (BiLSTM) network module, a dropout layer, and a classification layer. The coarse-grained network module adopts a two-channel convolutional neural network structure and performs two-channel coarse-graining on the label image, after which a concatenation layer fuses the bleeding-area image features extracted by the two channels. After the BiLSTM module extracts temporal features from the fused features, the result passes through the dropout layer and the classification layer in sequence; the dropout layer prevents overfitting caused by the model's large parameter count, and the classification layer produces the image of the target area.
Further, the two-channel convolutional neural network structure in the coarse-grained network module replaces the multi-scale coarse-graining layer with average pooling layers:
in the first channel, with coarse-graining scale s = 1, the channel input is the label image itself; in the second channel, with coarse-graining scale s = 2, a one-dimensional average pooling layer with pooling size 2 and stride 2 is used; in general, for s = z, a one-dimensional average pooling layer with pooling size z and stride z replaces the coarse-graining layer at that scale.
Further, one-dimensional convolutional layers, BN (batch normalization) layers, and max pooling layers are used in the coarse-grained network module to build the convolutional neural network that extracts spatial features from the label image signal; in each channel's convolutional neural network, two one-dimensional convolutional layers are arranged, and a BN layer followed by a ReLU activation function is added after each one-dimensional convolutional layer.
Further, the bleeding volume is judged as follows: if the infrared image energy at the bleeding point lies in a first preset interval, primary treatment is deemed necessary; if it lies in a second preset interval, secondary treatment is deemed necessary; if it exceeds a third preset value, tertiary treatment is deemed necessary; and if it is below the minimum of the first preset interval, no treatment is deemed necessary.
Owing to the above technical scheme, the invention has the following advantages:
1. Compared with the prior art, the invention combines infrared thermal-imaging processing with deep learning, can accurately extract the bleeding area and precisely locate the bleeding point, and offers small measurement error, fast response, and high sensitivity, detecting bleeding points accurately and reliably at high precision and high speed with real-time early warning.
2. The invention judges bleeding points more accurately and can estimate the bleeding volume at a bleeding point by artificial-intelligence means, assisting the clinician in further treatment.
Drawings
FIG. 1 is a schematic diagram of the deep learning-based intraoperative bleeding point detection system in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the DSCNN-BiLSTM network model in an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently some, but not all, embodiments of the invention. All other embodiments obtained by a person skilled in the art on the basis of the described embodiments fall within the scope of protection of the invention.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the present invention. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise; furthermore, the terms "comprises" and/or "comprising" used in this specification specify the presence of the stated features, steps, operations, devices, and components, and/or combinations thereof.
In one embodiment of the present invention, an intraoperative bleeding point detection system based on deep learning is provided. In this embodiment, as shown in FIG. 1, the system includes:
the image data acquisition module, which captures video of a suspected bleeding area during the operation and splits the video into frames to obtain an infrared image sequence;
the marking module, which labels the target area in the infrared image sequence at the pixel level to obtain a label image corresponding to each original image;
the image segmentation and extraction module, which inputs the label images into the DSCNN-BiLSTM network model, extracts the image features of the bleeding area, and acquires the image of the target area so as to narrow the detection range for the bleeding area;
and the positioning module, which locates the bleeding point and judges the bleeding volume from the acquired target-area image.
In one possible embodiment, a preprocessing module is further included and acts before the video of the suspected intraoperative bleeding area is acquired. The preprocessing module cools the suspected bleeding area, flushes it, and reduces bleeding.
In this embodiment, cooling is preferably achieved by irrigating the suspected bleeding area. For example, a flushing fluid at 23-25 °C, below body temperature, is used; the low-temperature fluid constricts the capillaries, and the flushing simultaneously cools the area.
In one possible implementation, the image data acquisition module uses a thermal infrared imager to capture the bleeding-area video and an OpenCV video-framing method to convert the video into an infrared image sequence.
Specifically, the thermal infrared imager is mounted on a tripod at a set position to capture video of the preprocessed bleeding area.
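The framing operation itself is a few lines of OpenCV. The sketch below is a minimal illustration under assumptions: the file path, sampling stride, and helper name are placeholders, not the patent's implementation.

    import cv2

    def video_to_frames(video_path, stride=1):
        # Split a bleeding-area video into an image sequence, keeping every `stride`-th frame.
        cap = cv2.VideoCapture(video_path)
        frames = []
        index = 0
        while True:
            ok, frame = cap.read()  # ok is False once the video is exhausted
            if not ok:
                break
            if index % stride == 0:
                frames.append(frame)
            index += 1
        cap.release()
        return frames

Each returned frame is one image of the infrared sequence passed on to the marking module.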
In one possible embodiment, the marking module uses 3D Slicer to label the target area in the infrared image sequence at the pixel level, taking the region that reaches the preset temperature in the infrared image as the target region.
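Although the labeling itself is done in 3D Slicer, the preset-temperature criterion is a simple per-pixel threshold. A minimal sketch, assuming the infrared frame has already been converted to a per-pixel temperature map; the function name and the threshold value are hypothetical:

    import numpy as np

    def target_region_mask(temperature_map, preset_temp=37.5):
        # Binary pixel mask: 1 where the temperature reaches the preset value, else 0.
        return (temperature_map >= preset_temp).astype(np.uint8)

The resulting mask plays the role of the pixel-level label image for the corresponding original frame.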
In one possible embodiment, as shown in FIG. 2, the DSCNN-BiLSTM network model includes: a coarse-grained network module, a bidirectional long short-term memory (BiLSTM) network module, a dropout layer, and a classification layer.
The coarse-grained network module adopts a two-channel convolutional neural network structure and performs two-channel coarse-graining on the label image, after which a concatenation layer fuses the bleeding-area image features extracted by the two channels.
After the BiLSTM module extracts temporal features from the fused features, the result passes through the dropout layer and the classification layer in sequence; the dropout layer prevents overfitting caused by the model's large parameter count, and the classification layer produces the image of the target area. In this embodiment, the number of BiLSTM hidden units is preferably 128, the dropout value is set to 0.5, and the activation function is softmax.
In this embodiment, coarse-graining is defined as follows. Given an original input signal $X = \{x_1, x_2, \dots, x_N\}$, the coarse-graining operation is shown in equation (1):

$$y_j^{(s)} = \frac{1}{s} \sum_{i=(j-1)s+1}^{js} x_i, \qquad 1 \le j \le \left\lfloor N/s \right\rfloor \tag{1}$$

where $y_j^{(s)}$ is the coarse-grained signal, $N$ is the length of the original input signal, $x_i$ is the $i$-th value of the original input signal, and $s$ is the coarse-graining scale.
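Equation (1) is equivalent to averaging consecutive non-overlapping windows of length s. A minimal NumPy sketch (the helper name is ours, not the patent's):

    import numpy as np

    def coarse_grain(x, s):
        # Coarse-grain a 1-D signal by averaging non-overlapping windows of length s (Eq. (1)).
        x = np.asarray(x, dtype=float)
        n = len(x) // s
        return x[: n * s].reshape(n, s).mean(axis=1)

For s = 1 this returns the signal unchanged, which is why the first channel can take the label image itself as input.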
Optionally, the two-channel convolutional neural network structure in the coarse-grained network module replaces the multi-scale coarse-graining layer with average pooling layers. Specifically: in the first channel, with coarse-graining scale s = 1, the channel input is the label image itself; in the second channel, with coarse-graining scale s = 2, a one-dimensional average pooling layer with pooling size 2 and stride 2 is used; in general, for s = z, a one-dimensional average pooling layer with pooling size z and stride z replaces the coarse-graining layer at that scale. Coarse-graining the label image data with two-channel input allows its spatial characteristics to be fully extracted.
The average pooling layer $p_{m,n}^{l}$ is calculated as follows:

$$p_{m,n}^{l} = \frac{1}{W} \sum_{t=(n-1)W+1}^{nW} a_{m,t}^{l} \tag{2}$$

where $p_{m,n}^{l}$ is the $n$-th neuron of the $m$-th feature map in layer $l$, $a_{m,t}^{l}$ is the $t$-th activation of that feature map, and $W$ is the pooling window size.
Optionally, one-dimensional convolutional layers, BN layers, and max pooling layers are used in the coarse-grained network module to build the convolutional neural network that extracts spatial features from the label image signal. In each channel's convolutional neural network, two one-dimensional convolutional layers (Conv1D) are arranged, and a batch normalization (BN) layer followed by a ReLU activation function is added after each one-dimensional convolutional layer; this keeps model training stable, accelerates training and the convergence of accuracy, and prevents gradient explosion and vanishing gradients.
The convolutional layer and the max pooling layer operate as follows:

$$y_e^{d} = f\left(w_e^{d} * x^{d-1} + b_e^{d}\right) \tag{3}$$

$$p_{m,n}^{l} = \max_{(n-1)W+1 \le t \le nW} a_{m,t}^{l} \tag{4}$$

where $y_e^{d}$ is the feature extracted by the $d$-th convolutional layer; $f$ is the activation function; $w_e^{d}$ is the weight of the $e$-th convolution kernel in layer $d$, with $e$ an integer not less than 1; $*$ denotes the convolution operation; $x^{d-1}$ is the input feature vector; $b_e^{d}$ is the bias; $p_{m,n}^{l}$ is the $n$-th neuron of the $m$-th feature map in layer $l$; and $W$ is the pooling window size.
In the convolutional neural network module, the parameters of each layer are set from manual experience, as shown in Table 1.
After spatial features are extracted from the label image data by the two-channel convolutional neural network, a concatenation layer fuses the data of the two channels:

$$F = \operatorname{concat}\left(F^{(1)}, F^{(2)}, \dots, F^{(z)}\right) \tag{5}$$

where $F^{(z)}$ is the feature extracted from the label image by the $z$-th channel and $F$ is the fused feature.
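Taken together, equations (1) to (5) describe a two-channel Conv1D front end fused by concatenation and followed by a BiLSTM head. The Keras sketch below shows one plausible assembly under assumptions: the sequence length, channel count, filter counts, and kernel sizes are placeholders (the per-layer parameters of Table 1 are not reproduced here), and only the hyperparameters quoted in this description (128 BiLSTM units, dropout 0.5, softmax) are taken from the text.

    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_block(x, filters, kernel_size):
        # Conv1D -> BN -> ReLU -> max pooling, as in each channel of the coarse-grained module.
        x = layers.Conv1D(filters, kernel_size, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        return layers.MaxPooling1D(pool_size=2)(x)

    def build_dscnn_bilstm(seq_len=1024, in_channels=1, num_classes=2):
        inp = layers.Input(shape=(seq_len, in_channels))
        # Channel 1: coarse-graining scale s = 1, i.e. the label-image signal itself.
        c1 = conv_block(inp, 32, 7)
        c1 = conv_block(c1, 64, 5)
        # Channel 2: scale s = 2, realised as a 1-D average pooling layer (size 2, stride 2).
        c2 = layers.AveragePooling1D(pool_size=2, strides=2)(inp)
        c2 = conv_block(c2, 32, 7)
        c2 = conv_block(c2, 64, 5)
        # Concatenation layer: fuse the two channels' features along the time axis (Eq. (5)).
        fused = layers.Concatenate(axis=1)([c1, c2])
        # BiLSTM temporal features -> dropout -> softmax classification layer.
        h = layers.Bidirectional(layers.LSTM(128))(fused)
        h = layers.Dropout(0.5)(h)
        out = layers.Dense(num_classes, activation="softmax")(h)
        return tf.keras.Model(inp, out)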
In this embodiment, model compilation uses an Adadelta optimizer with the default initial learning rate of 1, a decay factor of 0.006, a batch size of 64, and 50 iterations.
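A sketch of that training configuration, reusing build_dscnn_bilstm from the sketch above; the random placeholder data exists only so the sketch runs end to end, and the InverseTimeDecay schedule is our stand-in for the stated decay factor of 0.006:

    import numpy as np
    import tensorflow as tf

    # Placeholder training data (shapes are assumptions, not the patent's dataset).
    x_train = np.random.rand(64, 1024, 1).astype("float32")
    y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, size=64), num_classes=2)

    lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
        initial_learning_rate=1.0,  # Adadelta default of 1, as stated above
        decay_steps=1,
        decay_rate=0.006,           # decay factor quoted in the text
    )
    model = build_dscnn_bilstm()
    model.compile(
        optimizer=tf.keras.optimizers.Adadelta(learning_rate=lr_schedule),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, batch_size=64, epochs=50)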
In one possible embodiment, the target-region images undergo data augmentation before being input into the deep learning network.
Optionally, the bleeding volume in this embodiment is judged as follows: if the infrared image energy at the bleeding point lies in a first preset interval, primary treatment is deemed necessary; if it lies in a second preset interval, secondary treatment is deemed necessary; if it exceeds a third preset value, tertiary treatment is deemed necessary; and if it is below the minimum of the first preset interval, no treatment is deemed necessary.
From least to most aggressive, the treatment levels are: primary treatment, secondary treatment, and tertiary treatment.
Primary treatment may stop bleeding by compression or similar means; secondary treatment by high-temperature cautery or similar means; and tertiary treatment by suturing or similar means.
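These graded rules amount to a small decision function. A minimal sketch; the numeric interval bounds below are hypothetical placeholders, since the patent leaves them to the operator's experience (see the next paragraph):

    def treatment_level(ir_energy, first=(10.0, 20.0), second=(20.0, 30.0), third=30.0):
        # Map infrared image energy at a bleeding point to a treatment level (0 = none).
        if ir_energy < first[0]:
            return 0  # below the first preset interval: no treatment needed
        if ir_energy <= first[1]:
            return 1  # primary treatment, e.g. compression haemostasis
        if ir_energy <= second[1]:
            return 2  # secondary treatment, e.g. high-temperature cautery
        return 3      # above the third preset value: tertiary treatment, e.g. suturing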
In this embodiment, the first preset interval, the second preset interval, and the third preset value are set according to the specific practical situation and the operator's experience, and are not limited here.
In summary, in use, the suspected bleeding site is monitored with the infrared thermal imager after complete sterilization while the site is irrigated and cooled. The temperature at a bleeding opening is higher than its surroundings, and the position with the highest temperature is the bleeding opening, which localizes the bleeding point. The collected data are, on the one hand, displayed to the operator as images and, on the other hand, transmitted back to a computer, where artificial intelligence evaluates them, calculates the blood flow velocity and bleeding volume, and, from the result, determines the treatment for the bleeding point at that position: compression haemostasis, high-temperature cautery, suturing, or no treatment.
Finally, it should be noted that the above embodiments only illustrate, rather than limit, the technical solution of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. An intraoperative bleeding point detection system based on deep learning, comprising:
the image data acquisition module is used for capturing video of a suspected bleeding area during an operation and splitting the video into frames to obtain an infrared image sequence;
the marking module is used for labeling the target area in the infrared image sequence at the pixel level to obtain a label image corresponding to each original image;
the image segmentation and extraction module inputs the label images into a DSCNN-BiLSTM network model, extracts image features of the bleeding area, and acquires an image of the target area;
the positioning module is used for locating the bleeding point and judging the bleeding volume from the acquired image of the target area;
the DSCNN-BiLSTM network model comprising: a coarse-grained network module, a bidirectional long short-term memory (BiLSTM) network module, a dropout layer, and a classification layer;
wherein the coarse-grained network module adopts a two-channel convolutional neural network structure and performs two-channel coarse-graining on the label image, after which a concatenation layer fuses the bleeding-area image features extracted by the two channels;
and after the BiLSTM module extracts temporal features from the fused features, the result passes through the dropout layer and the classification layer in sequence, the dropout layer preventing overfitting caused by the model's large parameter count and the classification layer producing the image of the target area.
2. The deep learning-based intraoperative bleeding point detection system of claim 1, further comprising a preprocessing module; the preprocessing module is used for cooling the suspected bleeding area, flushing the bleeding area, and reducing bleeding.
3. The deep learning-based intraoperative bleeding point detection system of claim 1, wherein the image data acquisition module acquires the bleeding-area video with a thermal infrared imager and converts it into an infrared image sequence with an OpenCV video-framing method.
4. The deep learning-based intraoperative bleeding point detection system of claim 1, wherein the marking module uses 3D Slicer to label the target area in the infrared image sequence at the pixel level.
5. The deep learning based intraoperative bleeding point detection system of claim 4, wherein the target region is a region in the infrared image that reaches a preset temperature.
6. The deep learning-based intraoperative bleeding point detection system of claim 1, wherein the two-channel convolutional neural network structure in the coarse-grained network module replaces the multi-scale coarse-graining layer with average pooling layers;
in the first channel, with coarse-graining scale s = 1, the channel input is the label image itself; in the second channel, with coarse-graining scale s = 2, a one-dimensional average pooling layer with pooling size 2 and stride 2 is used, and for s = z, a one-dimensional average pooling layer with pooling size z and stride z replaces the coarse-graining layer at that scale.
7. The deep learning-based intraoperative bleeding point detection system of claim 1, wherein one-dimensional convolutional layers, BN layers, and max pooling layers are used in the coarse-grained network module to build the convolutional neural network that extracts spatial features from the label image signal; in each channel's convolutional neural network, two one-dimensional convolutional layers are arranged, and a BN layer followed by a ReLU activation function is added after each one-dimensional convolutional layer.
8. The deep learning-based intraoperative bleeding point detection system of claim 1, wherein the bleeding volume is judged as follows: if the infrared image energy at the bleeding point lies in a first preset interval, primary treatment is deemed necessary; if it lies in a second preset interval, secondary treatment is deemed necessary; if it exceeds a third preset value, tertiary treatment is deemed necessary; and if it is below the minimum of the first preset interval, no treatment is deemed necessary.
CN202310660999.1A 2023-06-06 2023-06-06 Intraoperative bleeding point detection system based on deep learning Active CN116385977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310660999.1A CN116385977B (en) 2023-06-06 2023-06-06 Intraoperative bleeding point detection system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310660999.1A CN116385977B (en) 2023-06-06 2023-06-06 Intraoperative bleeding point detection system based on deep learning

Publications (2)

Publication Number Publication Date
CN116385977A CN116385977A (en) 2023-07-04
CN116385977B true CN116385977B (en) 2023-08-15

Family

ID=86971691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310660999.1A Active CN116385977B (en) 2023-06-06 2023-06-06 Intraoperative bleeding point detection system based on deep learning

Country Status (1)

Country Link
CN (1) CN116385977B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3014758U (en) * 1995-02-14 1995-08-15 純二 田中 TV image device for endoscope
CN109886389A (en) * 2019-01-09 2019-06-14 南京邮电大学 A kind of novel two-way LSTM neural network construction method based on Highway and DC
CN109978002A (en) * 2019-02-25 2019-07-05 华中科技大学 Endoscopic images hemorrhage of gastrointestinal tract detection method and system based on deep learning
CN112614145A (en) * 2020-12-31 2021-04-06 湘潭大学 Deep learning-based intracranial hemorrhage CT image segmentation method
WO2022066797A1 (en) * 2020-09-23 2022-03-31 Wayne State University Detecting, localizing, assessing, and visualizing bleeding in a surgical field
CN114580707A (en) * 2022-01-26 2022-06-03 安徽农业大学 Emotional tendency prediction model, building method and prediction method of multi-feature fusion product
CN115761365A (en) * 2022-11-28 2023-03-07 首都医科大学附属北京友谊医院 Intraoperative hemorrhage condition determination method and device and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11373750B2 (en) * 2017-09-08 2022-06-28 The General Hospital Corporation Systems and methods for brain hemorrhage classification in medical images using an artificial intelligence network
JP7376677B2 (en) * 2020-02-21 2023-11-08 オリンパス株式会社 Image processing system, endoscope system and method of operating the endoscope system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Intrusion Detection Method Based on DSCNN-BiLSTM" (《基于DSCNN-BiLSTM的入侵检测方法》); Shang Fubo et al.; Science Technology and Engineering (《科学技术与工程》), vol. 21, no. 8, pp. 3214-3221 *

Also Published As

Publication number Publication date
CN116385977A (en) 2023-07-04

Similar Documents

Publication Publication Date Title
US12114929B2 (en) Retinopathy recognition system
CN110473192B (en) Digestive tract endoscope image recognition model training and recognition method, device and system
CN109190540B (en) Biopsy region prediction method, image recognition device, and storage medium
CN109377474B (en) Macular positioning method based on improved Faster R-CNN
CN108553081B (en) Diagnosis system based on tongue fur image
CN104224129B (en) A kind of vein blood vessel depth recognition method and prompt system
CN111898580B (en) System, method and equipment for acquiring body temperature and respiration data of people wearing masks
CN111325133B (en) Image processing system based on artificial intelligent recognition
CN111860203B (en) Abnormal pig identification device, system and method based on image and audio mixing
CN111179258A (en) Artificial intelligence method and system for identifying retinal hemorrhage image
CN114445339A (en) Urinary calculus detection and classification method based on improved fast RCNN algorithm
CN109805891A (en) Post-operative recovery state monitoring method, device, system, readable medium and colour atla
CN116385977B (en) Intraoperative bleeding point detection system based on deep learning
CN111862118B (en) Pressure sore staging training method, staging method and staging system
CN110223772A (en) A kind for the treatment of endocrine diseases diagnostic device and data information processing method
Caya et al. Development of Pupil Diameter Determination using Tiny-YOLO Algorithm
CN115082739A (en) Endoscope evaluation method and system based on convolutional neural network
KR102595429B1 (en) Apparatus and method for automatic calculation of bowel preparation
CN114494299A (en) Temperature detection device
CN113380383A (en) Medical monitoring method, device and terminal
CN111260635A (en) Full-automatic fundus photo acquisition, eye disease identification and personalized management system with embedded lightweight artificial neural network
CN115578395B (en) System and method for identifying liquid in drainage bag
CN116746926B (en) Automatic blood sampling method, device, equipment and storage medium based on image recognition
CN108766548A (en) Physical-examination machine auxiliary operation method and system
KR102566890B1 (en) Method for surgical site monitoring and device using the same

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant