CN116919593B - Gallbladder extractor for cholecystectomy - Google Patents


Info

Publication number
CN116919593B
CN116919593B (Application CN202310974065.5A)
Authority
CN
China
Prior art keywords
image
computed tomography
laparoscopic
cholecystectomy
neural network
Prior art date
Legal status
Active
Application number
CN202310974065.5A
Other languages
Chinese (zh)
Other versions
CN116919593A (en)
Inventor
Zhou Chao (周超)
Current Assignee
Liyang Traditional Chinese Medicine Hospital
Original Assignee
Liyang Traditional Chinese Medicine Hospital
Priority date
Filing date
Publication date
Application filed by Liyang Traditional Chinese Medicine Hospital filed Critical Liyang Traditional Chinese Medicine Hospital
Priority to CN202310974065.5A
Publication of CN116919593A
Application granted
Publication of CN116919593B
Legal status: Active
Anticipated expiration


Classifications

    • A61B 34/20 Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 17/320016 Endoscopic cutting instruments, e.g. arthroscopes, resectoscopes
    • A61B 17/3205 Excision instruments
    • A61B 34/25 User interfaces for surgical systems
    • G06T 7/0012 Image analysis; biomedical image inspection
    • A61B 2017/320052 Guides for cutting instruments
    • A61B 2034/2055 Tracking techniques; optical tracking systems
    • A61B 2034/2065 Tracking using image or pattern recognition
    • A61B 2034/2068 Surgical navigation using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B 2034/2072 Reference field transducer attached to an instrument or patient
    • G06T 2207/10081 Image acquisition modality; computed x-ray tomography [CT]
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30004 Biomedical image processing

Abstract

The invention relates to the technical field of medical instruments and discloses a gallbladder extractor for cholecystectomy, comprising the following modules: a computed tomography image set acquisition module, a subset matching module, an image segmentation module, an image feature extraction module, a graph network construction module and a graph neural network model construction module. When a surgeon performs cholecystectomy with a laparoscope, the invention can simultaneously display the computed tomography images corresponding to the dwell angle of the laparoscope, assisting the surgeon in observing the position of the patient's gallbladder stones and the condition of gallbladder lesions; the matching of the computed tomography images is performed automatically in combination with a graph neural network model, reducing the demands on the surgeon's experience and skill.

Description

Gallbladder extractor for cholecystectomy
Technical Field
The invention relates to the field of medical instruments, in particular to a gallbladder extractor for cholecystectomy.
Background
Wall-adherent gallbladder calculi, also called intramural gallbladder calculi, are a special type of gallbladder stone embedded within the gallbladder wall. Such stones cannot be removed while preserving the gallbladder, so the diseased portion of the gallbladder must be resected.
During laparoscopic resection, the lesion can only be identified by observing its appearance. Intramural gallbladder calculi show up clearly on medical images such as B-mode ultrasound or CT images, but the lesion's appearance may differ little from that of a normal gallbladder. The surgeon must therefore determine the resection site from the preoperative medical images, yet mapping the lesion position in the medical image onto the position observed in the laparoscopic image is difficult and depends heavily on the surgeon's experience and skill.
Disclosure of Invention
The invention provides a gallbladder extractor for cholecystectomy, which addresses the technical problem in the related art that mapping the lesion position in a medical image onto the position observed in the laparoscopic image is difficult and depends on the surgeon's experience and skill.
The invention provides a gallbladder extractor for cholecystectomy operation, comprising the following modules:
a computed tomography image set acquisition module for acquiring a computed tomography (CT) image set of a gallbladder region, the computed tomography image set comprising a plurality of subsets, each subset comprising computed tomography images of the same imaging orientation; by presetting acquisition parameters, the number of computed tomography images in each subset is made the same;
a subset matching module for matching a subset according to the spatial angle at which the laparoscope dwells;
the image segmentation module is used for acquiring a laparoscopic image of a laparoscopic dwell period and performing image segmentation on the laparoscopic image and the computed tomography images of the matched subset to obtain region images;
the image feature extraction module is used for extracting features of the laparoscopic image, the computed tomography image and the regional image to obtain an image feature vector;
the graph network construction module is used for constructing a graph network based on the region images, wherein the graph network comprises nodes and edges between the nodes; the nodes of the graph network comprise first nodes and second nodes, the first nodes corresponding to the laparoscopic image and the computed tomography images, and the second nodes corresponding to the region images of the laparoscopic image and the computed tomography images;
and the graph neural network model construction module is used for inputting the graph network and the image feature vectors into the graph neural network model; the graph neural network outputs a classification label indicating whether a computed tomography image needs to be displayed, and the computed tomography image that needs to be displayed is transmitted to a display.
In a preferred embodiment, the preset acquisition parameters include slice thickness and slice spacing; the slice thickness represents the thickness of each computed tomography image slice, i.e. the extent of each image in the direction perpendicular to the imaging orientation, and the slice spacing represents the distance between adjacent computed tomography image slices.
In a preferred embodiment, the laparoscopic dwell time period represents a time period during which the laparoscope remains in the same gallbladder location for more than a set time threshold.
In a preferred embodiment, the laparoscopic image or the computed tomography image is input into a first convolutional neural network model whose fully connected layer output is connected to a classifier; the classification label of the classifier is expressed as P = {p_1, ..., p_n}, where p_1, ..., p_n are discrete values each representing a possible number of gallbladder stones present in the laparoscopic image or the computed tomography image, and the fully connected layer output of the first convolutional neural network model is taken as the image feature vector of the laparoscopic image or the computed tomography image;

the region image is input into a second convolutional neural network model whose fully connected layer output is connected to a classifier; the classification label of the classifier is expressed as Q = {q_1, q_2}, where q_1 and q_2 respectively indicate whether a lesion appears in the region image.
In a preferred embodiment, an edge exists between the second nodes corresponding to adjacent region images of the same laparoscopic image, indicating that a connection exists between those adjacent region images; an edge exists between the second nodes corresponding to adjacent region images of the same computed tomography image, indicating that a connection exists between those adjacent region images; an edge exists between the second nodes corresponding to region images of two adjacent computed tomography images of the same subset that overlap in spatial position, provided that the similarity of the image feature vectors of the two adjacent computed tomography images is larger than a set threshold; and each second node of a region image of the laparoscopic image is connected by an edge to all second nodes of the region images of the computed tomography images of the subset matched to the laparoscopic image.
In a preferred embodiment, the graph neural network model includes N layers;
the calculation formula of the graph neural network model is:

h_i^(n) = σ( W^(n) · Σ_{j ∈ N(i)} h_j^(n-1) )

wherein h_i^(n) denotes the intermediate vector of the i-th node at the n-th layer, N(i) denotes the set of neighbor nodes connected to node i, h_j^(n-1) denotes the intermediate vector of the j-th node at layer n-1, W^(n) denotes the transformation matrix of the n-th layer, and σ denotes the sigmoid activation function;

when n = 1, h_j^(0) = δ_j, where δ_j denotes the image feature vector of the j-th node;

the intermediate vector of the i-th node output by the N-th layer of the graph neural network model is taken as the final vector.
In a preferred embodiment, the laparoscope comprises a hand-held tube; an implant tube is mounted at one end of the hand-held tube, an eyepiece is mounted inside the hand-held tube, and an objective lens is mounted at the end of the implant tube far from the hand-held tube. An illumination optical fiber is also mounted in the hand-held tube, and its light-emitting end extends to the objective lens. A fixing sleeve is fitted over the implant tube and fixedly mounted on the operating table, and a detection unit for detecting the angle of the objective lens when the laparoscope dwells is arranged on the inner circumferential surface of the fixing sleeve. The detection unit comprises a first sensor, a second sensor and a third sensor: the first sensor detects the displacement of the implant tube along its axis, the second sensor detects the rotation angle of the implant tube about its central axis, and the third sensor detects the deflection angle of the central axis of the implant tube relative to the central axis of the fixing sleeve. The surface of the implant tube is coated with a marker comprising a plurality of radial marker rings and a plurality of axial marker lines arranged perpendicular to each other; the radial marker rings are evenly distributed along the axial direction of the implant tube, the radial marker rings are made of a material recognized by the first sensor, and the axial marker lines are made of a material recognized by the second sensor.
In a preferred embodiment, the interior of the implant tube is also fitted with a plurality of spaced apart spacer tubes and rods, with adjacent two spacer tubes and rods joined end to end.
The invention has the beneficial effects that:
the invention can simultaneously display the computer tomography image with the same stay angle of the laparoscope when a doctor uses the laparoscope to do cholecystectomy operation, is used for assisting the doctor to observe the position of gall bladder stones and the condition of gall bladder lesions of a patient, and is combined with a graph neural network model to automatically match the computer tomography image, thereby reducing the requirements on the experience and the technology of the doctor.
Drawings
Fig. 1 is a schematic block diagram of the present invention.
Fig. 2 is a schematic view of the external appearance structure of the laparoscope of the present invention.
FIG. 3 is a schematic cross-sectional elevation view of the laparoscope of the present invention.
FIG. 4 is a schematic diagram of the structure of the detecting unit of the present invention.
Fig. 5 is a schematic view of the surface development of the implant tube of the present invention.
In the figures: 1. hand-held tube; 11. illumination optical fiber; 2. implant tube; 21. spacer tube; 22. rod; 23. objective lens; 3. eyepiece; 4. marker; 41. radial marker ring; 42. axial marker line; 5. fixing sleeve; 6. detection unit; 61. first sensor; 62. second sensor; 63. third sensor; 101. computed tomography image set acquisition module; 102. subset matching module; 103. image segmentation module; 104. image feature extraction module; 105. graph network construction module; 106. graph neural network model construction module.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Referring to Figs. 1-5, a gallbladder extractor for cholecystectomy comprises:
A computed tomography image set acquisition module 101 for acquiring a Computed Tomography (CT) image set of a gallbladder region, the computed tomography image set comprising a plurality of subsets, each subset comprising computed tomography images of the same imaging orientation; by presetting acquisition parameters, the number of computed tomography images of each subset is made the same;
in one embodiment of the invention, the preset acquisition parameters include slice thickness and slice spacing; the slice thickness represents the thickness of each computed tomography image slice, i.e. the extent of each image in the direction perpendicular to the imaging orientation, and the slice spacing represents the distance between adjacent computed tomography image slices;
in one embodiment of the invention, the set of computed tomography images comprises three subsets corresponding to imaging orientations of an axial position, a coronal position, and a sagittal position, respectively, the axial position representing a slice image taken along a transverse plane of the patient's body, the coronal position representing a slice image taken along an anterior-posterior direction of the patient's body, and the sagittal position representing a slice image taken along a left-right direction of the patient's body;
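As an illustration of how the acquisition module might organize the three orientation subsets, the following Python sketch groups CT slices by imaging orientation and keeps the subsets the same size; the class and function names (CTSlice, build_subsets) and the field layout are illustrative assumptions, not part of the patent.

from dataclasses import dataclass
from typing import Dict, List

import numpy as np

@dataclass
class CTSlice:
    """One CT slice with its imaging orientation and position along the stack."""
    pixels: np.ndarray      # 2-D slice data
    orientation: str        # "axial", "coronal" or "sagittal"
    position_mm: float      # position along the stacking direction

def build_subsets(slices: List[CTSlice]) -> Dict[str, List[CTSlice]]:
    """Group CT slices into the three orientation subsets and sort each subset
    along its stacking direction; with identical preset acquisition parameters
    (slice thickness and slice spacing) and coverage, the subsets end up with
    the same number of slices, and they are trimmed here as a safeguard."""
    subsets: Dict[str, List[CTSlice]] = {"axial": [], "coronal": [], "sagittal": []}
    for s in slices:
        subsets[s.orientation].append(s)
    for subset in subsets.values():
        subset.sort(key=lambda s: s.position_mm)
    n = min(len(v) for v in subsets.values())   # equal slice count per subset
    return {k: v[:n] for k, v in subsets.items()}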
a subset matching module 102 for matching a subset according to the spatial angle at which the laparoscope stays;
in one embodiment of the invention, a unified coordinate system is established from the detection results of the detection unit 6 to determine the spatial angle at which the laparoscope dwells;
an image segmentation module 103 for acquiring a laparoscopic image of a laparoscopic dwell period, image segmenting the laparoscopic image and a computed tomography image of the matched subset to obtain a region image;
the laparoscope stay time period represents a time period that the stay time of the laparoscope at the same gall bladder part exceeds a set time threshold;
for example, the region image may be obtained by image segmentation of the laparoscopic image and the computed tomography image of the matched subset by a V-Net neural network model;
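A minimal sketch of this segmentation step is given below, assuming a generic PyTorch segmentation model (a V-Net-style network operates on 3-D volumes; it is applied per 2-D image here for simplicity) and treating each connected foreground component as one region image; the helper name extract_region_images is hypothetical.

from typing import List

import numpy as np
import torch
from scipy import ndimage

def extract_region_images(image: np.ndarray,
                          seg_model: torch.nn.Module,
                          threshold: float = 0.5) -> List[np.ndarray]:
    """Run a segmentation network on one laparoscopic or CT image and crop each
    connected foreground component as a region image."""
    with torch.no_grad():
        x = torch.from_numpy(image).float()[None, None]        # (1, 1, H, W)
        mask = torch.sigmoid(seg_model(x))[0, 0].numpy() > threshold
    labeled, num_regions = ndimage.label(mask)
    regions = []
    for region_id in range(1, num_regions + 1):
        ys, xs = np.where(labeled == region_id)
        # bounding-box crop of this segmented region
        regions.append(image[ys.min():ys.max() + 1, xs.min():xs.max() + 1])
    return regions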
an image feature extraction module 104 for performing feature extraction on the laparoscopic image, the computed tomography image and the regional image to obtain an image feature vector;
The laparoscopic image or the computed tomography image is input into a first convolutional neural network model whose fully connected layer output is connected to a classifier; the classification label of the classifier is expressed as P = {p_1, ..., p_n}, where p_1, ..., p_n are discrete values each representing a possible number of gallbladder stones present in the laparoscopic image or the computed tomography image, and the fully connected layer output of the first convolutional neural network model is taken as the image feature vector of the laparoscopic image or the computed tomography image;

the region image is input into a second convolutional neural network model whose fully connected layer output is connected to a classifier; the classification label of the classifier is expressed as Q = {q_1, q_2}, where q_1 and q_2 respectively indicate whether a lesion appears in the region image;
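The sketch below illustrates one plausible form of the first and second convolutional neural network models: a small PyTorch CNN whose fully connected layer output serves as the image feature vector and feeds a classifier head. The architecture, layer sizes, and the choice of six stone-count classes are assumptions for illustration only.

import torch
import torch.nn as nn

class ImageFeatureNet(nn.Module):
    """Small CNN: the fully connected layer output is the image feature vector,
    and a classifier head produces the labels (stone-count classes for whole
    images, lesion / no lesion for region images)."""

    def __init__(self, num_classes: int, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, feature_dim)     # feature vector layer
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, x: torch.Tensor):
        feat = self.fc(torch.flatten(self.backbone(x), 1))
        return feat, self.classifier(feat)   # (image feature vector, class logits)

# first network: labels p_1..p_n are possible gallbladder-stone counts (here 0..5)
stone_net = ImageFeatureNet(num_classes=6)
# second network: labels q_1, q_2 mean "lesion" / "no lesion" in a region image
lesion_net = ImageFeatureNet(num_classes=2)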
a graph network construction module 105 for constructing a graph network including nodes and edges between the nodes based on the region images; the nodes of the graph network comprise a first node and a second node, wherein the first node corresponds to the laparoscopic image and the computed tomography image, and the second node corresponds to the regional images of the laparoscopic image and the computed tomography image;
An edge exists between the second nodes corresponding to adjacent region images of the same laparoscopic image, indicating that a connection exists between those adjacent region images; an edge exists between the second nodes corresponding to adjacent region images of the same computed tomography image, indicating that a connection exists between those adjacent region images; an edge exists between the second nodes corresponding to region images of two adjacent computed tomography images of the same subset that overlap in spatial position, provided that the similarity of the image feature vectors of the two adjacent computed tomography images is larger than a set threshold; and each second node of a region image of the laparoscopic image is connected by an edge to all second nodes of the region images of the computed tomography images of the subset matched to the laparoscopic image;
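The following sketch assembles the edges between the second (region) nodes according to the four rules above; the adjacency lists, overlap pairs, and feature dictionaries are assumed to be precomputed by the segmentation and feature-extraction modules, and all function and parameter names are illustrative assumptions.

import itertools
from typing import Dict, List, Tuple

import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def build_region_edges(lap_adjacent: List[Tuple[int, int]],
                       ct_adjacent: List[Tuple[int, int]],
                       ct_overlap: Dict[Tuple[int, int], List[Tuple[int, int]]],
                       ct_slice_features: Dict[int, np.ndarray],
                       lap_region_nodes: List[int],
                       ct_region_nodes: List[int],
                       sim_threshold: float = 0.8) -> List[Tuple[int, int]]:
    """Edge list between second (region) nodes, following the four rules."""
    edges: List[Tuple[int, int]] = []
    # rule 1: adjacent region images within the same laparoscopic image
    edges += lap_adjacent
    # rule 2: adjacent region images within the same CT image
    edges += ct_adjacent
    # rule 3: spatially overlapping regions of two adjacent CT slices of the same
    # subset, provided the two slices' feature vectors are similar enough
    for (slice_a, slice_b), region_pairs in ct_overlap.items():
        if cosine(ct_slice_features[slice_a], ct_slice_features[slice_b]) > sim_threshold:
            edges += region_pairs
    # rule 4: every laparoscopic region node connects to every region node of the
    # CT images in the matched subset
    edges += list(itertools.product(lap_region_nodes, ct_region_nodes))
    return edges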
a graph neural network model construction module 106 for inputting the graph network and the image feature vectors into the graph neural network model; the graph neural network outputs a classification label indicating whether a computed tomography image needs to be displayed, and the computed tomography image that needs to be displayed is transmitted to the display.
The graph neural network model comprises N layers;
the calculation formula of the graph neural network model is:

h_i^(n) = σ( W^(n) · Σ_{j ∈ N(i)} h_j^(n-1) )

wherein h_i^(n) denotes the intermediate vector of the i-th node at the n-th layer, N(i) denotes the set of neighbor nodes connected to node i, h_j^(n-1) denotes the intermediate vector of the j-th node at layer n-1, W^(n) denotes the transformation matrix of the n-th layer, and σ denotes the sigmoid activation function;

when n = 1, h_j^(0) = δ_j, where δ_j denotes the image feature vector of the j-th node;

the intermediate vector of the i-th node output by the N-th layer of the graph neural network model is taken as the final vector;
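A minimal NumPy sketch of this layer-wise computation is shown below, assuming the node feature vectors δ_j and the neighbor sets N(i) have already been built from the graph network; it is an illustration of the formula above, not the patent's implementation.

import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def gnn_forward(features: np.ndarray,          # (num_nodes, d) image feature vectors δ_j
                neighbors: list,               # neighbors[i] = list of node ids in N(i)
                weights: list) -> np.ndarray:  # weights[n-1] = transformation matrix W^(n)
    """Layer-wise computation h_i^(n) = σ(W^(n) · Σ_{j∈N(i)} h_j^(n-1)),
    with h_j^(0) = δ_j; the last layer's vectors are the final node vectors."""
    h = features                                # layer 0: image feature vectors
    for W in weights:                           # one pass per layer n = 1..N
        new_h = np.zeros((h.shape[0], W.shape[0]))
        for i, nbrs in enumerate(neighbors):
            agg = np.sum(h[list(nbrs)], axis=0) if nbrs else np.zeros(h.shape[1])
            new_h[i] = sigmoid(W @ agg)         # transform the neighbor sum, then activate
        h = new_h
    return h                                    # final vectors from the N-th layer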
in the training process of the graph neural network model, the classification labels are marked by a professional doctor with gallbladder operation experience;
in performing a cholecystectomy procedure, the laparoscopic images and the computed tomography images corresponding to the imaging orientations are simultaneously displayed during the laparoscopic dwell period, and the computed tomography images of the plurality of imaging orientations may be simultaneously displayed.
In one embodiment of the invention, the gallbladder extractor for cholecystectomy comprises a laparoscope. The laparoscope comprises a hand-held tube 1; an implant tube 2 is mounted at one end of the hand-held tube 1, an eyepiece 3 is mounted inside the hand-held tube 1, and an objective lens 23 is mounted at the end of the implant tube 2 far from the hand-held tube 1. An illumination optical fiber 11 is mounted in the hand-held tube 1, and its light-emitting end extends to the objective lens 23. A fixing sleeve 5 is fitted over the implant tube 2 and fixedly mounted on the operating table, and a detection unit 6 for detecting the angle of the objective lens 23 when the laparoscope dwells is arranged on the inner circumferential surface of the fixing sleeve 5. The detection unit 6 comprises a first sensor 61, a second sensor 62 and a third sensor 63: the first sensor 61 detects the displacement of the implant tube 2 along its axis, the second sensor 62 detects the rotation angle of the implant tube 2 about its central axis, and the third sensor 63 detects the deflection angle of the central axis of the implant tube 2 relative to the central axis of the fixing sleeve 5. The surface of the implant tube 2 is coated with a marker 4 comprising a plurality of radial marker rings 41 and a plurality of axial marker lines 42 arranged perpendicular to each other; the radial marker rings 41 are evenly distributed along the axial direction of the implant tube 2, the radial marker rings 41 are made of a material recognized by the first sensor 61, and the axial marker lines 42 are made of a material recognized by the second sensor 62.
It should be noted that a first coordinate system and a second coordinate system are established: the first coordinate system is the coordinate system of the objective lens 23, and the second coordinate system is the coordinate system of the computed tomography (CT) image set. The first coordinate system can be converted into the second coordinate system, i.e. each coordinate node in the first coordinate system can be mapped into the second coordinate system to generate a new coordinate node, so as to determine the spatial angle at which the laparoscope dwells.
It should further be noted that the initial position of the objective lens 23 is taken as the origin of the first coordinate system. After the surgeon inserts the implant tube 2 into the collar at the central position of the fixing sleeve 5, the laparoscope rotates within the body along with the surgeon's movements, changing the coordinate node of the objective lens 23 in the first coordinate system. During the movement of the laparoscope, the first sensor 61 counts the radial marker rings 41 passing its detection position to determine the travel length of the objective lens 23, the second sensor 62 detects the change in the number of axial marker lines 42 passing its detection position to determine the rotation angle of the objective lens 23, and the third sensor 63 detects the deflection angle of the central axis of the implant tube 2 relative to the central axis of the fixing sleeve 5 to determine the deflection of the objective lens 23 relative to the origin of the first coordinate system. By combining the three parameters obtained by the detection unit 6, the angle of the objective lens 23 in the first coordinate system when the surgeon stops moving is determined, and the dwell angle of the laparoscope is then mapped into the second coordinate system.
For example, if the angle at which the laparoscope dwells is in the left-right direction, the corresponding imaging orientation may be the axial position; if the dwell angle is in the anterior-posterior direction, the corresponding imaging orientation may be the coronal position; and if the dwell angle is in the up-down direction, the corresponding imaging orientation may be the sagittal position.
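As a rough illustration of this matching step, the sketch below combines the three sensor readings into the objective-lens pose and then selects the subset whose reference direction, following the example mapping above, is closest to the dwell direction expressed in the second (CT) coordinate system; the ring pitch, line count, and all function names are assumed values for illustration.

import numpy as np

def dwell_pose(rings_passed: int, ring_pitch_mm: float,
               lines_passed: int, lines_per_turn: int,
               deflection_deg: float):
    """Combine the three sensor readings into the objective-lens pose in the
    first coordinate system: insertion depth (first sensor, radial marker rings),
    roll about the tube axis (second sensor, axial marker lines), and deflection
    of the implant-tube axis relative to the fixing sleeve (third sensor)."""
    depth_mm = rings_passed * ring_pitch_mm
    roll_deg = lines_passed * 360.0 / lines_per_turn
    return depth_mm, roll_deg, deflection_deg

def match_subset(view_dir_ct: np.ndarray) -> str:
    """Pick the CT subset whose reference direction is closest to the laparoscope
    dwell direction after mapping it into the second (CT) coordinate system,
    following the example mapping given above."""
    dirs = {"axial":    np.array([1.0, 0.0, 0.0]),   # left-right direction
            "coronal":  np.array([0.0, 1.0, 0.0]),   # anterior-posterior direction
            "sagittal": np.array([0.0, 0.0, 1.0])}   # up-down direction
    v = view_dir_ct / np.linalg.norm(view_dir_ct)
    return max(dirs, key=lambda k: abs(float(np.dot(v, dirs[k]))))

# example: a dwell direction mostly along the anterior-posterior axis matches the coronal subset
print(match_subset(np.array([0.1, 0.9, 0.2])))   # -> "coronal"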
In one embodiment of the present invention, a plurality of spaced apart spacer tubes 21 and rods 22 are also mounted inside the implant tube 2, with adjacent two spacer tubes 21 and rods 22 being joined end to end.
The embodiments have been described above with reference to the drawings, but the invention is not limited to the specific implementations described, which are illustrative rather than restrictive; many variations made by those of ordinary skill in the art in light of this disclosure remain within the scope of the invention.

Claims (6)

1. A gallbladder extractor for cholecystectomy, comprising the following modules:
a computed tomography image set acquisition module (101) for acquiring a computed tomography image set of a gallbladder region, the computed tomography image set comprising a plurality of subsets, each subset comprising computed tomography images of a same imaging orientation; by presetting acquisition parameters, the number of computed tomography images of each subset is made the same;
a subset matching module (102) for matching a subset according to the spatial angle at which the laparoscope stays;
an image segmentation module (103) for acquiring a laparoscopic image of a laparoscopic dwell time period, performing image segmentation on the laparoscopic image and a computed tomography image of the matched subset to obtain a region image;
an image feature extraction module (104) for performing feature extraction on the laparoscopic image, the computed tomography image and the regional image to obtain an image feature vector;
a graph network construction module (105) for constructing a graph network based on the region images, the graph network including nodes and edges between the nodes; the nodes of the graph network comprise a first node and a second node, wherein the first node corresponds to the laparoscopic image and the computed tomography image, and the second node corresponds to the regional images of the laparoscopic image and the computed tomography image;
a graph neural network model construction module (106) for inputting the graph network and the image feature vectors into the graph neural network model, the graph neural network outputting a classification label indicating whether a computed tomography image needs to be displayed, and the computed tomography image that needs to be displayed being transmitted to a display;
wherein the laparoscopic image or the computed tomography image is input into a first convolutional neural network model whose fully connected layer output is connected to a classifier; the classification label of the classifier is expressed as P = {p_1, ..., p_n}, where p_1, ..., p_n are discrete values each representing a possible number of gallbladder stones present in the laparoscopic image or the computed tomography image, and the fully connected layer output of the first convolutional neural network model is taken as the image feature vector of the laparoscopic image or the computed tomography image;

the region image is input into a second convolutional neural network model whose fully connected layer output is connected to a classifier; the classification label of the classifier is expressed as Q = {q_1, q_2}, where q_1 and q_2 respectively indicate whether a lesion appears in the region image;
an edge exists between the second nodes corresponding to adjacent region images of the same laparoscopic image, indicating that a connection exists between those adjacent region images; an edge exists between the second nodes corresponding to adjacent region images of the same computed tomography image, indicating that a connection exists between those adjacent region images; an edge exists between the second nodes corresponding to region images of two adjacent computed tomography images of the same subset that overlap in spatial position, provided that the similarity of the image feature vectors of the two adjacent computed tomography images is larger than a set threshold; and each second node of a region image of the laparoscopic image is connected by an edge to all second nodes of the region images of the computed tomography images of the subset matched to the laparoscopic image;
the graph neural network model comprises N layers;
the calculation formula of the graph neural network model is:

h_i^(n) = σ( W^(n) · Σ_{j ∈ N(i)} h_j^(n-1) )

wherein h_i^(n) denotes the intermediate vector of the i-th node at the n-th layer, N(i) denotes the set of neighbor nodes connected to node i, h_j^(n-1) denotes the intermediate vector of the j-th node at layer n-1, W^(n) denotes the transformation matrix of the n-th layer, and σ denotes the sigmoid activation function;

when n = 1, h_j^(0) = δ_j, where δ_j denotes the image feature vector of the j-th node;

the intermediate vector of the i-th node output by the N-th layer of the graph neural network model is taken as the final vector.
2. The gallbladder extractor for cholecystectomy according to claim 1, wherein the preset acquisition parameters include slice thickness and slice spacing; the slice thickness represents the thickness of each computed tomography image slice, i.e. the extent of each image in the direction perpendicular to the imaging orientation, and the slice spacing represents the distance between adjacent computed tomography image slices.
3. The gallbladder extractor for cholecystectomy according to claim 2, wherein the laparoscopic dwell time period represents a period of time during which the laparoscope remains at the same gallbladder site for more than a set time threshold.
4. The gallbladder extractor for cholecystectomy according to claim 3, wherein the laparoscope comprises a hand-held tube (1); an implant tube (2) is mounted at one end of the hand-held tube (1), an eyepiece (3) is mounted in the hand-held tube (1), and an objective lens (23) is mounted at the end of the implant tube (2) far from the hand-held tube (1); an illumination optical fiber (11) is mounted in the hand-held tube (1), and the light-emitting end of the illumination optical fiber (11) extends to the objective lens (23); a fixing sleeve (5) is fitted over the implant tube (2) and fixedly mounted on an operating table; and a detection unit (6) for detecting the angle of the objective lens (23) when the laparoscope dwells is arranged on the inner circumferential surface of the fixing sleeve (5).
5. The gallbladder extractor for cholecystectomy according to claim 4, wherein the detection unit (6) comprises a first sensor (61), a second sensor (62) and a third sensor (63); the first sensor (61) is used for detecting the displacement of the implant tube (2) along its axis, the second sensor (62) is used for detecting the rotation angle of the implant tube (2) about its central axis, and the third sensor (63) is used for detecting the deflection angle of the central axis of the implant tube (2) relative to the central axis of the fixing sleeve (5); the surface of the implant tube (2) is coated with a marker (4), the marker (4) comprising a plurality of radial marker rings (41) and a plurality of axial marker lines (42) arranged perpendicular to each other; the radial marker rings (41) are evenly distributed along the axial direction of the implant tube (2), the radial marker rings (41) are made of a material recognized by the first sensor (61), and the axial marker lines (42) are made of a material recognized by the second sensor (62).
6. The gallbladder extractor for cholecystectomy according to claim 5, wherein a plurality of spaced-apart spacer tubes (21) and rods (22) are further mounted inside the implant tube (2), and adjacent spacer tubes (21) and rods (22) are connected end to end.
CN202310974065.5A · 2023-08-04 · 2023-08-04 · Gallbladder extractor for cholecystectomy · Active · CN116919593B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310974065.5A · 2023-08-04 · 2023-08-04 · Gallbladder extractor for cholecystectomy (CN116919593B)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310974065.5A · 2023-08-04 · 2023-08-04 · Gallbladder extractor for cholecystectomy (CN116919593B)

Publications (2)

Publication Number Publication Date
CN116919593A CN116919593A (en) 2023-10-24
CN116919593B (en) 2024-02-06

Family

ID=88380640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310974065.5A · Active · CN116919593B · 2023-08-04 · 2023-08-04 · Gallbladder extractor for cholecystectomy

Country Status (1)

Country Link
CN (1) CN116919593B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583576A (en) * 2018-12-17 2019-04-05 上海联影智能医疗科技有限公司 A kind of medical image processing devices and method
CN112932663A (en) * 2021-03-02 2021-06-11 成都与睿创新科技有限公司 Intelligent auxiliary method and system for improving safety of laparoscopic cholecystectomy
CN114494364A (en) * 2021-12-20 2022-05-13 中国科学院深圳先进技术研究院 Liver three-dimensional ultrasonic and CT image registration initialization method and device and electronic equipment
CN114901194A (en) * 2019-12-31 2022-08-12 奥瑞斯健康公司 Anatomical feature identification and targeting
WO2022204605A1 (en) * 2021-03-26 2022-09-29 The General Hospital Corporation Interpretation of intraoperative sensor data using concept graph neural networks
CN115359873A (en) * 2022-10-17 2022-11-18 成都与睿创新科技有限公司 Control method for operation quality
CN115546589A (en) * 2022-11-29 2022-12-30 浙江大学 Image generation method based on graph neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8267853B2 (en) * 2008-06-23 2012-09-18 Southwest Research Institute System and method for overlaying ultrasound imagery on a laparoscopic camera display

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583576A (en) * 2018-12-17 2019-04-05 上海联影智能医疗科技有限公司 A kind of medical image processing devices and method
CN114901194A (en) * 2019-12-31 2022-08-12 奥瑞斯健康公司 Anatomical feature identification and targeting
CN112932663A (en) * 2021-03-02 2021-06-11 成都与睿创新科技有限公司 Intelligent auxiliary method and system for improving safety of laparoscopic cholecystectomy
WO2022204605A1 (en) * 2021-03-26 2022-09-29 The General Hospital Corporation Interpretation of intraoperative sensor data using concept graph neural networks
CN114494364A (en) * 2021-12-20 2022-05-13 中国科学院深圳先进技术研究院 Liver three-dimensional ultrasonic and CT image registration initialization method and device and electronic equipment
CN115359873A (en) * 2022-10-17 2022-11-18 成都与睿创新科技有限公司 Control method for operation quality
CN115546589A (en) * 2022-11-29 2022-12-30 浙江大学 Image generation method based on graph neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deep learning methods for medical image analysis: research and challenges; Tian Juanxiu, Liu Guocai, Gu Shanshan, Ju Zhongjian, Liu Jinguang, Gu Dongdong; Acta Automatica Sinica (03); pp. 19-42 *

Also Published As

Publication number Publication date
CN116919593A (en) 2023-10-24

Similar Documents

Publication Publication Date Title
EP2811889B1 (en) Invisible bifurcation detection within vessel tree images
US8116847B2 (en) System and method for determining an optimal surgical trajectory
CN106456271B (en) The quantitative three-dimensional imaging and printing of surgery implant
US10085672B2 (en) Diagnostic endoscopic imaging support apparatus and method, and non-transitory computer readable medium on which is recorded diagnostic endoscopic imaging support program
JP4899068B2 (en) Medical image observation support device
JP5504028B2 (en) Observation support system, method and program
JP3820244B2 (en) Insertion support system
JP4418400B2 (en) Image display device
KR20200073245A (en) Image-based branch detection and mapping for navigation
CN112741692B (en) Rapid navigation method and system for realizing device navigation to target tissue position
EP3936026B1 (en) Medical image processing device, processor device, endoscopic system, medical image processing method, and program
JP6254053B2 (en) Endoscopic image diagnosis support apparatus, system and program, and operation method of endoscopic image diagnosis support apparatus
CN102596003B (en) System for determining airway diameter using endoscope
CN114340540B (en) Instrument image reliability system and method
JP2003265408A (en) Endoscope guide device and method
JP7190059B2 (en) Image matching method, apparatus, device and storage medium
CN106473807A (en) Preplaned using the automatic ENT surgical operation of backtracking maze problem solution
WO2012165572A1 (en) Medical image display apparatus and medical image diagnostic apparatus
JP2023083555A (en) Medical image processing apparatus, endoscope system, medical image processing system, method of operating medical image processing apparatus, program, and storage medium
JP4686279B2 (en) Medical diagnostic apparatus and diagnostic support apparatus
CN116919593B (en) Gallbladder extractor for cholecystectomy
JP2008054763A (en) Medical image diagnostic apparatus
JP2008018016A (en) Medical image processing equipment and method
CN111466952B (en) Real-time conversion method and system for ultrasonic endoscope and CT three-dimensional image
Díaz et al. Robot based Transurethral Bladder Tumor Resection with automatic detection of tumor cells

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant