CN115731175A - Surgical ligation auxiliary method, device and related equipment - Google Patents

Surgical ligation auxiliary method, device and related equipment

Info

Publication number
CN115731175A
CN115731175A (application CN202211431164.0A)
Authority
CN
China
Prior art keywords: target, determining, detection frame, endoscope image, abnormal object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211431164.0A
Other languages
Chinese (zh)
Inventor
杨振宇 (Yang Zhenyu)
胡珊 (Hu Shan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Endoangel Medical Technology Co Ltd
Original Assignee
Wuhan Endoangel Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Endoangel Medical Technology Co Ltd filed Critical Wuhan Endoangel Medical Technology Co Ltd
Priority to CN202211431164.0A priority Critical patent/CN115731175A/en
Publication of CN115731175A publication Critical patent/CN115731175A/en
Pending legal-status Critical Current

Landscapes

  • Endoscopes (AREA)

Abstract

The application provides a surgical ligation auxiliary method, a surgical ligation auxiliary device and related equipment. A target abnormal object in a pre-acquired endoscope image is identified to obtain a detection frame marking the position of the target abnormal object, together with the position information of that frame. Position characteristics, epithelial attribute characteristics, vein attribute characteristics and lymph node attribute characteristics are then acquired for each target pixel point in the endoscope image, where the target pixel points are pixel points of a target area, the target area comprising the area outside the detection frame. Based on those four characteristics of each target pixel point, a data set of the target type objects included in the endoscope image is determined. Finally, an auxiliary strategy for performing surgical ligation on the target abnormal object is determined from the position information of the detection frame and the data set of target type objects. The degree of intelligence of surgical ligation assistance is thereby improved, and the misdiagnosis rate and subsequent patient complications are reduced.

Description

Surgical ligation auxiliary method, device and related equipment
Technical Field
The application relates to the technical field of auxiliary medical treatment, in particular to an operation ligation auxiliary method, an operation ligation auxiliary device and related equipment.
Background
An anorectoscope, also called an anoscope or rectoscope, is an endoscope used to examine the rectum. Anal endoscopy is one of the routine examinations for anorectal diseases; it is suitable for lesions at the distal anus and rectum and near the dentate line, and can also be used to take biopsies.
The dentate line is an important anatomical boundary of the anal canal: the tissue above it develops from the endoderm, while the tissue below it develops from the ectoderm. Clinically, a target abnormal object above the dentate line is called an internal abnormality, one below the dentate line an external abnormality, and one spanning both a mixed abnormality; correspondingly, hemorrhoids above the dentate line are internal hemorrhoids, those below it are external hemorrhoids, and those spanning it are mixed hemorrhoids. The inventors of the present application found that during surgical ligation it is difficult for physicians to judge accurately where a target abnormal object lies relative to the dentate line, which easily causes severe pain to the patient.
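The clinical classification above amounts to a simple positional rule. A minimal sketch, assuming image coordinates that grow downward so that "above the dentate line" means a smaller vertical coordinate (the coordinate convention is an assumption, not stated by the patent):

```python
def classify_abnormality(top_y, bottom_y, dentate_line_y):
    """Classify an abnormality by its vertical extent relative to the
    dentate line: entirely above -> internal, entirely below -> external,
    spanning the line -> mixed. Assumes y grows downward in the image."""
    if bottom_y < dentate_line_y:
        return "internal"
    if top_y > dentate_line_y:
        return "external"
    return "mixed"
```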
Therefore, how to improve the intelligence of surgical ligation assistance so as to avoid the above problems is a technical problem that urgently needs to be solved in the field of assisted medical treatment.
Disclosure of Invention
The application provides an auxiliary method and device for surgical ligation and related equipment, and aims to solve the problem of how to effectively improve the intelligence of surgical ligation assistance.
In one aspect, the present application provides a surgical ligation assistance method, the method comprising:
identifying a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, wherein the endoscope image is an endoscope image shot when a patient is subjected to anal endoscopy;
acquiring position characteristics, epithelial attribute characteristics, vein attribute characteristics and lymph node attribute characteristics of each target pixel point in the endoscope image, wherein the target pixel points are pixel points in a target area, and the target area comprises the area outside the detection frame;
determining a data set of a target type object included in the endoscopic image based on the position characteristic, the epithelial attribute characteristic, the venous attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscopic image;
determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
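Read as an image-analysis pipeline, the four claimed steps compose naturally. The following Python sketch is illustrative only — the callables and the `DetectionBox` type are hypothetical placeholders, not structures from the patent:

```python
from dataclasses import dataclass

@dataclass
class DetectionBox:
    # Corner coordinates of the detection frame (hypothetical layout)
    x1: float
    y1: float
    x2: float
    y2: float

def ligation_assist(image, detect, extract_features, classify_regions, decide):
    """Skeleton of the claimed four-step method; each callable stands in
    for one claimed sub-procedure."""
    box = detect(image)                   # step 1: locate the target abnormal object
    feats = extract_features(image, box)  # step 2: per-pixel features outside the box
    regions = classify_regions(feats)     # step 3: data set of target type objects
    return decide(box, regions)           # step 4: ligation assistance strategy
```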
In one possible implementation manner of the present application, the determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object includes:
if the data set of the target type object only comprises the dentate line anal canal, determining that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt a doctor not to suggest surgical ligation;
if the data set of the target type object only comprises dentate line rectum, determining an auxiliary strategy for performing surgical ligation on the target abnormal object as a prompt that a doctor can perform normal surgical ligation;
if the data set of the target type object comprises the dentate line anal canal and the dentate line rectum at the same time, acquiring a boundary line between the dentate line anal canal and the dentate line rectum;
and determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the boundary line.
In one possible implementation manner of the present application, the determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the boundary line includes:
carrying out image transformation processing on the boundary line to obtain a surgery early warning area with a target shape;
determining the position relation between the target abnormal object and the operation early warning area based on the position information of the detection frame and the operation early warning area;
if the position relationship is that the target abnormal object and the operation early warning area have an intersection relationship, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor not to suggest operation ligation;
if the position relationship is that the target abnormal object and the operation early warning area are in a separated relationship, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor to perform normal operation ligation.
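A minimal sketch of the claimed position-relation test, under the simplifying assumption that the surgery early-warning area is represented by its axis-aligned bounding rectangle (the patent transforms the boundary line into a target shape, which need not be rectangular):

```python
def boxes_intersect(a, b):
    """Axis-aligned overlap test; a and b are (x1, y1, x2, y2) rectangles."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def ligation_advice(det_box, warning_box):
    """Map the claimed position relation to the claimed prompt:
    intersection -> advise against ligation, separation -> normal ligation."""
    if boxes_intersect(det_box, warning_box):
        return "surgical ligation not suggested"
    return "normal surgical ligation possible"
```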
In one possible implementation manner of the present application, obtaining the position characteristics of each target pixel point in the endoscopic image includes:
acquiring the endoscope body contour in the endoscope image;
acquiring a central line of the endoscope body contour and the two intersection points of the central line with the contour;
determining the position of the anus opening based on the distance from each intersection point to the central point of the endoscope image;
and determining the position characteristics of each target pixel point based on the Euclidean distance from each target pixel point in the endoscope image to the anus opening position.
In one possible implementation manner of the present application, the determining the anus opening position based on the distance from each intersection point to the central point of the endoscopic image includes:
respectively obtaining the distance from each intersection point to the central point of the endoscope image;
and comparing the two distances, and selecting the intersection point closer to the central point of the endoscope image as the anus opening position.
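This selection rule and the subsequent position characteristic can be sketched in a few lines, assuming 2-D pixel coordinates (function names are illustrative, not from the patent):

```python
import math

def anus_position(intersections, image_center):
    """Of the two intersections of the scope-body centerline with the
    scope contour, pick the one closer to the image center."""
    return min(intersections, key=lambda p: math.dist(p, image_center))

def position_feature(pixel, anus_pos):
    """Claimed position characteristic: Euclidean distance from a target
    pixel point to the anus opening position."""
    return math.dist(pixel, anus_pos)
```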
In a possible implementation manner of the present application, the determining a data set of a target type object included in an endoscopic image based on a position feature, an epithelial attribute feature, a venous attribute feature, and a lymph node attribute feature of each target pixel point in the endoscopic image includes:
performing weighted fitting on the position characteristic, the epithelial attribute characteristic, the vein attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image to obtain target type object parameters;
determining a data set of a target type object included in the endoscopic image based on the target type object parameter and a preset target type object parameter threshold.
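The weighted fitting and thresholding steps might look as follows. The weight values, the threshold, and the two-way labeling are illustrative assumptions only; the patent does not disclose concrete values:

```python
def target_type_parameter(features, weights):
    """Weighted combination of the four per-pixel features
    (position, epithelium, vein, lymph node); weights are illustrative."""
    return sum(w * f for w, f in zip(weights, features))

def classify_pixel(features, weights=(0.4, 0.2, 0.2, 0.2), threshold=0.5):
    """Compare the fitted parameter to a preset threshold to assign the
    pixel to a target type object (labels assumed for illustration)."""
    if target_type_parameter(features, weights) >= threshold:
        return "dentate line rectum"
    return "dentate line anal canal"
```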
In a possible implementation manner of the present application, the identifying a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking a position of the target abnormal object and position information of the detection frame includes:
and identifying the target abnormal object in the endoscope image acquired in advance based on a target abnormal object identification model trained in advance to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame.
In another aspect, the present application provides a surgical ligation aid, the device comprising:
the first identification unit is used for identifying a target abnormal object in an endoscope image acquired in advance to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, wherein the endoscope image is an endoscope image shot when a patient is subjected to anal endoscopy;
the first acquisition unit is used for acquiring the position characteristics, the epithelial attribute characteristics, the vein attribute characteristics and the lymph node attribute characteristics of each target pixel point in the endoscope image, wherein the target pixel points are pixel points in a target area, and the target area comprises the area outside the detection frame;
the first determining unit is used for determining a data set of a target type object included in the endoscope image based on the position characteristic, the epithelial attribute characteristic, the vein attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image;
and the second determination unit is used for determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
In a possible implementation manner of the present application, the second determining unit specifically includes:
a third determining unit, configured to determine, if the data set of the target type object only includes the dentate line anal canal, that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt a physician not to suggest surgical ligation;
a fourth determining unit, configured to determine, if the data set of the target type object only includes a dentate line rectum, that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt a physician that normal surgical ligation is possible;
a second obtaining unit, configured to obtain a boundary line between the dentate line anal canal and the dentate line rectum if the data set of the target type object includes the dentate line anal canal and the dentate line rectum at the same time;
and the fifth determining unit is used for determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the boundary line.
In a possible implementation manner of the present application, the fifth determining unit is specifically configured to:
carrying out image transformation processing on the boundary line to obtain a surgery early warning area with a target shape;
determining the position relation between the target abnormal object and the operation early warning area based on the position information of the detection frame and the operation early warning area;
if the position relation is that the target abnormal object and the operation early warning area have an intersection relation, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor not to suggest operation ligation;
if the position relationship is that the target abnormal object and the operation early warning area are in a separated relationship, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor to perform normal operation ligation.
In a possible implementation manner of the present application, the first obtaining unit specifically includes:
the third acquisition unit is used for acquiring the endoscope body contour in the endoscope image;
the fourth acquisition unit is used for acquiring a central line of the endoscope body contour and the two intersection points of the central line with the contour;
a sixth determining unit, configured to determine an anal orifice position based on a distance from each intersection point to a center point of the endoscopic image;
and the seventh determining unit is used for determining the position characteristics of each target pixel point based on the Euclidean distance from each target pixel point in the endoscope image to the anus opening position.
In a possible implementation manner of the present application, the sixth determining unit is specifically configured to:
respectively obtaining the distance from each intersection point to the central point of the endoscope image;
and comparing the two distances, and selecting the intersection point closer to the central point of the endoscope image as the anus opening position.
In a possible implementation manner of the present application, the first determining unit is specifically configured to:
performing weighted fitting on the position characteristic, the epithelial attribute characteristic, the vein attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image to obtain target type object parameters;
and determining a data set of the target type object included in the endoscopic image based on the target type object parameter and a preset target type object parameter threshold.
In a possible implementation manner of the present application, the first identifying unit is specifically configured to:
and identifying the target abnormal object in the endoscope image acquired in advance based on a target abnormal object identification model trained in advance to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame.
In another aspect, the present application further provides a computer device, including:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the surgical ligation assistance method.
In another aspect, the present application further provides a computer readable storage medium having a computer program stored thereon, the computer program being loaded by a processor to perform the steps of the surgical ligation assistance method.
According to the surgical ligation auxiliary method provided by the embodiment of the application, a target abnormal object in a pre-acquired endoscope image is identified to obtain a detection frame marking the position of the target abnormal object together with the position information of the detection frame, the endoscope image being an image shot while the patient undergoes anal endoscopy; the position characteristic, epithelial attribute characteristic, vein attribute characteristic and lymph node attribute characteristic of each target pixel point in the endoscope image are acquired, the target pixel points being pixel points in a target area that comprises the area outside the detection frame; a data set of the target type objects included in the endoscope image is determined based on those four characteristics of each target pixel point; and an auxiliary strategy for performing surgical ligation on the target abnormal object is determined based on the position information of the detection frame and the data set of target type objects. Whereas traditional methods cannot effectively provide intelligent assistance for this type of surgical ligation, the present method automatically identifies the target abnormal object and uses the relation between the position information of its detection frame and the data set of target type objects identified in the endoscope image to give an auxiliary ligation strategy intelligently, improving the degree of intelligence of surgical ligation assistance and reducing the misdiagnosis rate and subsequent patient complications.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of a surgical ligation assistance system provided in an embodiment of the present application;
fig. 2 is a schematic flow diagram of one embodiment of a surgical ligation assistance method provided in embodiments of the present application;
FIG. 3 is a schematic representation of an endoscopic image provided in an embodiment of the present application;
FIG. 4 is a schematic view of the centerline of the endoscope body contour and its intersection points provided in an embodiment of the present application;
FIG. 5 is a schematic illustration of a surgical warning area provided in an embodiment of the present application;
fig. 6 is a schematic structural view of one embodiment of a surgical ligation aid provided in embodiments of the present application;
fig. 7 is a schematic structural diagram of an embodiment of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience and simplicity of description; they do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In this application, the word "exemplary" is used to mean "serving as an example, instance, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes are not set forth in detail in order to avoid obscuring the description of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The embodiment of the application provides an auxiliary method, an auxiliary device and related equipment for surgical ligation, which are respectively described in detail below.
As shown in fig. 1, fig. 1 is a schematic view of a surgical ligation auxiliary system provided in an embodiment of the present application. The surgical ligation auxiliary system may include a computer device 100, and a surgical ligation auxiliary apparatus is integrated in the computer device 100.
In the embodiment of the present application, the computer device 100 is mainly configured to identify a target abnormal object in a pre-acquired endoscope image, and obtain a detection frame for marking a position of the target abnormal object and position information of the detection frame, where the endoscope image is an endoscope image captured during an anal endoscopy examination of a patient; acquiring position characteristics, epithelial attribute characteristics, vein attribute characteristics and lymph node attribute characteristics of each target pixel point in the endoscope image, wherein the target pixel points are pixel points in a target area, and the target area comprises an area except a detection frame; determining a data set of a target type object included in the endoscopic image based on the position characteristic, the epithelial attribute characteristic, the venous attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscopic image; determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
In this embodiment, the computer device 100 may be a terminal or a server. When the computer device 100 is a server, it may be an independent server, or a server network or server cluster composed of multiple servers; for example, the computer device 100 described in this embodiment includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server built from multiple servers, where a cloud server is constructed from a large number of computers or web servers based on cloud computing.
It is to be understood that, when the computer device 100 is a terminal in the embodiment of the present application, the terminal may be a device including both receiving and transmitting hardware, that is, a device capable of bidirectional communication over a bidirectional communication link. Such a device may include a cellular or other communication device with a single-line display, a multi-line display, or no multi-line display. The computer device 100 may specifically be a desktop terminal or a mobile terminal, such as a mobile phone, a tablet computer, a notebook computer, or a medical auxiliary instrument.
It can be understood by those skilled in the art that the application environment shown in fig. 1 is only one application scenario of the present application and does not limit its application scenarios. Other application environments may include more or fewer computer devices than shown in fig. 1; for example, only one computer device is shown in fig. 1, but the surgical ligation auxiliary system may further include one or more other computer devices, which is not limited herein.
In addition, as shown in fig. 1, the surgical ligation auxiliary system may further include a memory 200 for storing data, such as endoscope images shot while the patient undergoes anal endoscopy and surgical ligation auxiliary data generated during operation of the system.
It should be noted that the scenario diagram of the surgical ligation auxiliary system shown in fig. 1 is merely an example, and the surgical ligation auxiliary system and the scenario described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not constitute a limitation to the technical solution provided in the embodiment of the present application, and it is obvious to those skilled in the art that the technical solution provided in the embodiment of the present application is also applicable to similar technical problems with the evolution of the surgical ligation auxiliary system and the appearance of new business scenarios.
Next, a surgical ligation assistance method provided in the embodiment of the present application will be described.
In the embodiments of the surgical ligation auxiliary method of the present application, the surgical ligation auxiliary device is the executing body; for simplicity and convenience of description this executing body is omitted in the following method embodiments. The surgical ligation auxiliary device is applied to a computer apparatus, and the method includes: identifying a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, wherein the endoscope image is an endoscope image shot while the patient undergoes anal endoscopy; acquiring position characteristics, epithelial attribute characteristics, vein attribute characteristics and lymph node attribute characteristics of each target pixel point in the endoscope image, wherein the target pixel points are pixel points in a target area, and the target area comprises the area outside the detection frame; determining a data set of the target type objects included in the endoscope image based on those four characteristics of each target pixel point; and determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of target type objects.
Referring to fig. 2 to fig. 7, fig. 2 is a schematic flowchart illustrating an embodiment of a surgical ligation assisting method provided in an embodiment of the present application, where the surgical ligation assisting method includes steps 201 to 204:
201. and identifying the target abnormal object in the endoscope image acquired in advance to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame.
The endoscope image is an endoscope image shot when a patient is subjected to anal endoscopy, and the target abnormal object is an abnormal object in an area to be examined, such as polyp, hemorrhoid or other foreign bodies.
It should be noted that before step 201 is performed, the video captured by the anal endoscopy device needs to be preprocessed to obtain endoscope images. Specifically, the video may be decoded into first target images in RGB format, and each first target image is then resized to a preset target size. The target size may be set according to actual requirements; 640×640 is preferred.
In the embodiment of the application, the target abnormal object in the pre-acquired endoscope image can be identified based on a pre-trained target abnormal object identification model, yielding a detection frame for marking the position of the target abnormal object and the position information of the detection frame, wherein the target abnormal object identification model preferably uses a YOLOv7 network structure.
By adopting a pre-trained target abnormal object identification model based on the YOLOv7 network structure, the application can efficiently and accurately identify the target abnormal object in the pre-acquired endoscope image, improving overall identification efficiency and accuracy.
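Downstream of the detector, raw YOLO-style outputs must be turned into the detection frame and its position information. A hedged sketch, with the output format and the confidence threshold assumed for illustration rather than taken from the patent:

```python
def to_detection_frames(raw_detections, conf_threshold=0.25):
    """Convert YOLO-style raw outputs (cx, cy, w, h, confidence) into
    corner-coordinate detection frames for the target abnormal object."""
    frames = []
    for cx, cy, w, h, conf in raw_detections:
        if conf < conf_threshold:
            continue  # discard low-confidence detections
        frames.append({
            "x1": cx - w / 2, "y1": cy - h / 2,
            "x2": cx + w / 2, "y2": cy + h / 2,
            "confidence": conf,
        })
    return frames
```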
202. And acquiring the position characteristic, the epithelial attribute characteristic, the venous attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image.
The target pixel points are pixel points in a target area, and the target area comprises areas except the detection frame.
The position feature, epithelial attribute feature, vein attribute feature, and lymph node attribute feature of each target pixel point in the endoscope image may be acquired sequentially or simultaneously in a variety of ways. The position feature of a target pixel point refers to the specific position of the human body part actually captured at that pixel point, for example the dentate line rectum, the dentate line anal canal, or the dentate line. The epithelial attribute feature refers to the epithelium type of the body part actually captured at each target pixel point, for example single-layer columnar epithelium or stratified squamous epithelium. Likewise, the vein attribute feature may include the portal vein and the inferior vena cava, and the lymph node attribute feature may include the inferior mesenteric artery lymph node and the superficial inguinal lymph node.
For example, in the embodiment of the present application, obtaining the position characteristics of each target pixel point in the endoscopic image may specifically include steps A1 to A4:
A1, acquiring the endoscope body contour in the endoscope image;
the endoscope body contour is the contour of the endoscope body of the anus endoscope inspection equipment, as shown in the following fig. 3, the shooting visual angle corresponding to the image is shot reversely by the anus endoscope inspection equipment positioned in the body, a black rod similar to the image in the upper right region in the image is the endoscope body of the anus endoscope inspection equipment, and the method is specific, the endoscope image can be segmented through a pre-trained endoscope body segmentation model in the embodiment of the application, a segmentation image is obtained, the segmentation image is a binary image of the endoscope body, then the contour of the endoscope body is extracted by scanning all pixel points of the binary image of the endoscope body, so that the endoscope body contour in the endoscope image is obtained, wherein the preferable Unet + + network is selected for the endoscope body segmentation model, and the specific method for extracting the contour of the endoscope body is as follows:
C = { i : f(i) = 1 and ∃ j ∈ U(i, 8) with f(j) = 0 }
where U(i, 8) is the eight-neighborhood centered at point i, f(·) denotes the value of the binary segmentation image (1 for an endoscope body pixel, 0 for background), and C is the set of contour points; that is, a foreground pixel belongs to the contour when at least one of its eight neighbors is background.
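A minimal sketch of this contour-extraction rule, treating pixels outside the image as background:

```python
def extract_contour(mask):
    """Contour points of a binary mask: foreground pixels whose
    eight-neighborhood U(i, 8) contains at least one background pixel."""
    h, w = len(mask), len(mask[0])
    contour = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    # Pixels outside the image count as background.
                    if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                        contour.add((x, y))
    return contour

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
pts = extract_contour(mask)
```

On this tiny 2×2 foreground block every foreground pixel touches background, so all four are contour points; on a larger mask only the boundary pixels would remain.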
A2, acquiring a center line of the endoscope body contour and the two intersection points of the center line with the endoscope body contour;
As shown in fig. 4 below, fig. 4 shows the center line dividing the endoscope body contour and the two intersection points of the center line with the endoscope body contour.
In some embodiments of the present application, a straight line closest to all points on the endoscope body contour may be fitted by the least squares method, that is, the center line of the endoscope body contour; the center line is then extended to intersect the endoscope body contour, thereby obtaining the two intersection points.
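A minimal sketch of step A2, assuming a non-vertical centerline and using a tolerance band to locate the intersections (the patent does not specify how the extended line is intersected with the contour, so that part is an illustrative assumption):

```python
def fit_centerline(points):
    """Ordinary least-squares fit of y = a*x + b to the contour points
    (assumes the centerline is not vertical)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def line_contour_intersections(points, a, b, tol=1.0):
    """Contour points within tol of the extended centerline; the
    lexicographic min/max (extreme x) serve as the two intersections."""
    on_line = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
    return min(on_line), max(on_line)

# Synthetic contour: a thin diagonal band two pixels wide.
contour = [(x, x) for x in range(10)] + [(x, x + 1) for x in range(10)]
a, b = fit_centerline(contour)
p1, p2 = line_contour_intersections(contour, a, b)
```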
A3, determining the anal orifice position based on the distance from each intersection point to the central point of the endoscope image;
As mentioned above for the shooting view corresponding to the figure, the anal endoscopy device is located inside the human body and shoots in reverse, so the endoscope body is visible; in addition, since the device is inserted through the anus, the anal orifice position is present in fig. 4 below.
In an embodiment of the present application, the determining the position of the anal orifice based on the distance from each intersection point to the central point of the endoscopic image may specifically include B1 and B2:
B1, respectively obtaining the distance from each intersection point to the central point of the endoscope image;
If the endoscope image is a rectangular image, its central point is the intersection point of its diagonals; the central point and its coordinates can be marked in the image in advance.
In one embodiment, assume that the coordinates of the central point of the endoscope image are (x0, y0) and the coordinates of the two intersection points are (x1, y1) and (x2, y2); the Euclidean distances d1 and d2 from the respective intersection points to the central point can then be calculated from these coordinates.
And B2, comparing the distances from the intersection points to the central point of the endoscope image, and selecting the intersection point with the shorter distance to the central point as the anal orifice position.
As can be seen from the example in step B1 above, if d1 < d2, the coordinates of the anal orifice position are set as (xg, yg) = (x1, y1); if d1 > d2, they are set as (xg, yg) = (x2, y2).
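Steps B1 and B2 can be sketched as follows (the coordinates are illustrative, using a 640 × 640 image whose central point is (320, 320)):

```python
import math

def anal_orifice(center, p1, p2):
    """Steps B1-B2: pick the intersection point closer to the image
    central point as the anal orifice position (xg, yg)."""
    d1 = math.dist(center, p1)  # Euclidean distance, Python 3.8+
    d2 = math.dist(center, p2)
    return p1 if d1 < d2 else p2

# Two candidate intersections of the centerline with the contour.
xg, yg = anal_orifice((320, 320), (100, 620), (330, 300))
```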
And A4, determining the position feature of each target pixel point based on the Euclidean distance from each target pixel point in the endoscope image to the anal orifice position.
Specifically, the Euclidean distance from each pixel point (xn, yn) in the endoscope image to the anal orifice coordinates (xg, yg) may be calculated first, and the position feature ρs of each pixel point is then determined by the following formula:
ρs = √((xn − xg)² + (yn − yg)²)
For example, in the embodiment of the application, the epithelial attribute feature ρe of each target pixel point in the endoscope image may be identified through a pre-trained epithelial feature classification model, where the labels of the model are 0 for single-layer columnar epithelium and 1 for stratified squamous epithelium.
For example, in the embodiment of the application, the vein attribute feature ρv of each target pixel point in the endoscope image may be identified through a pre-trained vein feature classification model, where the labels of the model are 0 for the portal vein and 1 for the inferior vena cava.
For example, in the embodiment of the application, the lymph node attribute feature ρl of each target pixel point in the endoscope image may be identified through a pre-trained lymph node feature classification model, where the labels of the model are 0 for the inferior mesenteric artery lymph node and 1 for the superficial inguinal lymph node.
203. Determine a data set of the target type objects included in the endoscope image based on the position feature, epithelial attribute feature, vein attribute feature, and lymph node attribute feature of each target pixel point in the endoscope image.
In an embodiment of the present application, the determining a data set of a target type object included in the endoscopic image based on a position characteristic, an epithelial attribute characteristic, a venous attribute characteristic, and a lymph node attribute characteristic of each target pixel point in the endoscopic image includes steps C1 and C2:
C1, performing weighted fitting on the position feature, epithelial attribute feature, vein attribute feature, and lymph node attribute feature of each target pixel point in the endoscope image to obtain a target type object parameter;
in a specific embodiment, the target type object parameter P is calculated as shown in the following formula:
P = ω1·ρs + ω2·ρe + ω3·ρv + ω4·ρl
where ω1, ω2, ω3, and ω4 are weights trained by a preset machine learning algorithm, and ρs, ρe, ρv, and ρl are, in order, the position feature, epithelial attribute feature, vein attribute feature, and lymph node attribute feature of each target pixel point.
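A minimal sketch of the weighted fitting in step C1; the weights and per-pixel feature values below are made up for illustration and are not the patent's trained values:

```python
def target_type_parameter(rho_s, rho_e, rho_v, rho_l, weights):
    """P = w1*rho_s + w2*rho_e + w3*rho_v + w4*rho_l, where the weights
    are assumed to come from the pre-trained machine learning step."""
    w1, w2, w3, w4 = weights
    return w1 * rho_s + w2 * rho_e + w3 * rho_v + w4 * rho_l

# Illustrative (made-up) weights and per-pixel features.
P = target_type_parameter(0.8, 1.0, 1.0, 0.0, (0.1, 0.3, 0.3, 0.3))
```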
And C2, determining a data set of the target type object included in the endoscope image based on the target type object parameter and a preset target type object parameter threshold value.
The target type object parameter threshold may be set according to actual requirements, and is not particularly limited.
In the embodiment of the present application, if the target type object parameter is P and the target type object parameter threshold is P0, it is determined that the dentate line anal canal exists when P is greater than P0, and that the dentate line rectum exists otherwise.
Specifically, if the target type object parameter P corresponding to each target pixel point is greater than the target type object parameter threshold P0, it is determined that the data set of the target type object included in the endoscopic image only includes the dentate line anal canal; if the target type object parameter P corresponding to each target pixel point is smaller than the target type object parameter threshold value P0, determining that the data set of the target type object included in the endoscopic image only includes a dentate line rectum; and if the target type object parameters P corresponding to one part of the target pixel points are greater than the target type object parameter threshold value P0, and the target type object parameters P corresponding to the other part of the target pixel points are less than the target type object parameter threshold value P0, determining that the data set of the target type object included in the endoscope image simultaneously comprises a dentate line anal canal, a dentate line and a dentate line rectum.
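The per-pixel thresholding rule above can be sketched as follows (the category strings follow the text; the boundary case P = P0, which the text leaves open, falls to the rectum branch here):

```python
def target_type_dataset(pixel_params, p0):
    """All P > P0 -> anal canal only; all P < P0 -> rectum only;
    mixed -> anal canal, dentate line, and rectum together."""
    above = any(p > p0 for p in pixel_params)
    below = any(p < p0 for p in pixel_params)
    if above and below:
        return {"dentate line anal canal", "dentate line", "dentate line rectum"}
    if above:
        return {"dentate line anal canal"}
    return {"dentate line rectum"}

# Per-pixel parameters on either side of an illustrative threshold P0 = 0.5.
result = target_type_dataset([0.2, 0.9], 0.5)
```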
204. Determine an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
In some embodiments of the present application, the determining an auxiliary strategy for surgical ligation of the target abnormal object based on the position information of the detection frame and the data set of the target type object includes steps D1 to D4:
D1, if the data set of the target type object only comprises the dentate line anal canal, determining that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt the doctor that surgical ligation is not suggested;
In particular, after surgical ligation is not advised, the physician may also be advised to take other suitable approaches for treating the target abnormal object in the dentate line anal canal region.
D2, if the data set of the target type object only comprises dentate line rectum, determining that an auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt a doctor to perform normal surgical ligation;
D3, if the data set of the target type object simultaneously comprises the dentate line anal canal and the dentate line rectum, acquiring a boundary line between the dentate line anal canal and the dentate line rectum;
and D4, determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the boundary line.
Through the disclosed scheme, the embodiment of the application provides different auxiliary strategies for different positions of the target abnormal object, improving the user-friendliness and intelligence of the scheme.
In an embodiment of the present application, the determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the boundary line includes steps E1 to E4:
E1, performing image transformation processing on the boundary line to obtain a surgery early warning area of a target shape;
Specifically, the image transformation processing adopts the Hough circle transform, thereby obtaining a circular surgery early warning area.
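The text names the Hough circle transform (available in practice as, e.g., OpenCV's cv2.HoughCircles). As a dependency-free illustration, the snippet below substitutes an algebraic (Kåsa) least-squares circle fit of the boundary-line points — an assumption for demonstration, not the patent's method — to produce a circular warning region as (center, radius):

```python
import math

def fit_circle(points):
    """Kasa fit: since (x-a)^2 + (y-b)^2 = r^2 rearranges to
    x^2 + y^2 = 2a*x + 2b*y + c with c = r^2 - a^2 - b^2, solve the
    3x3 least-squares normal equations for (a, b, c) via Cramer's rule."""
    rows = [(2 * x, 2 * y, 1.0, x * x + y * y) for x, y in points]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * r[3] for r in rows) for i in range(3)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = v[i]
        sol.append(det3(Ak) / d)
    a, b, c = sol
    return (a, b), math.sqrt(c + a * a + b * b)

# Boundary-line points sampled on a circle of center (3, 4), radius 2.
pts = [(3 + 2 * math.cos(t / 10), 4 + 2 * math.sin(t / 10)) for t in range(63)]
center, radius = fit_circle(pts)
```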
E2, determining the position relation between the target abnormal object and the operation early warning area based on the position information of the detection frame and the operation early warning area;
Specifically, the position relationship between the target abnormal object and the operation early warning area can be determined by calculating the intersection ratio between the detection frame and the operation early warning area; this position relationship generally includes separation and intersection.
E3, if the position relation is that the target abnormal object and the operation early warning area have an intersection relation, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor not to suggest operation ligation;
and E4, if the position relationship is that the target abnormal object and the operation early warning area are in a separated relationship, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor that normal operation ligation can be performed.
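Computing the intersection ratio exactly would require rasterizing the circular region; as a lighter geometric stand-in (an assumption, not the patent's exact computation), one can test whether the detection frame and the circular warning region overlap at all, which is the separation/intersection outcome that steps E3 and E4 consume:

```python
def box_circle_relation(box, center, radius):
    """Steps E2-E4 outcome: 'intersect' if the detection frame touches the
    circular warning region (do not suggest ligation), else 'separate'
    (normal surgical ligation can be performed)."""
    x1, y1, x2, y2 = box  # detection frame corners, x1 < x2, y1 < y2
    cx, cy = center
    # Distance from the circle center to the nearest point of the box.
    dx = max(x1 - cx, 0, cx - x2)
    dy = max(y1 - cy, 0, cy - y2)
    return "intersect" if dx * dx + dy * dy <= radius * radius else "separate"

# Illustrative detection frames against a warning circle at (100, 100), r = 50.
rel_far = box_circle_relation((400, 400, 450, 450), (100, 100), 50)
rel_hit = box_circle_relation((120, 90, 180, 140), (100, 100), 50)
```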
Compared with traditional methods, which cannot effectively provide intelligent assistance for surgical ligation of this type, the surgical ligation auxiliary method disclosed in the embodiment of the application automatically identifies the target abnormal object and uses the position information of its corresponding detection frame, together with the data set of target type objects identified in the endoscope image, to intelligently give an auxiliary strategy for performing surgical ligation on the target abnormal object, improving the intelligence of surgical ligation assistance and reducing the misdiagnosis rate and subsequent complications of patients.
In order to better implement the surgical ligation auxiliary method in the embodiment of the present application, there is provided a surgical ligation auxiliary device in the embodiment of the present application on the basis of the surgical ligation auxiliary method, as shown in fig. 6, wherein the surgical ligation auxiliary device 600 includes:
a first identification unit 601, configured to identify a target abnormal object in an endoscope image acquired in advance, and obtain a detection frame for marking a position of the target abnormal object and position information of the detection frame, where the endoscope image is an endoscope image captured during an anal endoscopy of a patient;
a first obtaining unit 602, configured to obtain a position feature, an epithelial attribute feature, a vein attribute feature, and a lymph node attribute feature of each target pixel in the endoscopic image, where the target pixel is a pixel in a target region, and the target region includes a region except a detection frame;
a first determining unit 603, configured to determine a data set of a target type object included in the endoscopic image based on a position feature, an epithelial attribute feature, a vein attribute feature, and a lymph node attribute feature of each target pixel in the endoscopic image;
a second determining unit 604, configured to determine an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
In a possible implementation manner of the present application, the second determining unit 604 specifically includes:
a third determining unit, configured to determine, if the data set of the target type object only includes the dentate line anal canal, that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt a physician not to suggest surgical ligation;
a fourth determining unit, configured to determine, if the data set of the target type object only includes a dentate line rectum, that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt a physician that normal surgical ligation is possible;
a second obtaining unit, configured to obtain a boundary line between the dentate line anal canal and the dentate line rectum if the data set of the target type object includes the dentate line anal canal and the dentate line rectum at the same time;
and the fifth determining unit is used for determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the boundary line.
In a possible implementation manner of the present application, the fifth determining unit is specifically configured to:
carrying out image transformation processing on the boundary line to obtain a surgery early warning area with a target shape;
determining the position relation between the target abnormal object and the operation early warning area based on the position information of the detection frame and the operation early warning area;
if the position relation is that the target abnormal object and the operation early warning area have an intersection relation, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor not to suggest operation ligation;
if the position relationship is that the target abnormal object and the operation early warning area are in a separated relationship, determining that an auxiliary strategy for performing operation ligation on the target abnormal object is to prompt a doctor to perform normal operation ligation.
In a possible implementation manner of the present application, the first obtaining unit 602 specifically includes:
the third acquisition unit is used for acquiring the endoscope body outline in the endoscope image;
the fourth acquisition unit is used for acquiring a center line of the endoscope body contour and two intersection points of the center line with the endoscope body contour;
a sixth determining unit configured to determine an anal orifice position based on a distance from each of the intersection points to a center point of the endoscopic image;
and the seventh determining unit is used for determining the position feature of each target pixel point based on the Euclidean distance from each target pixel point in the endoscope image to the anal orifice position.
In a possible implementation manner of the present application, the sixth determining unit is specifically configured to:
respectively obtaining the distance from each intersection point to the central point of the endoscope image;
and comparing the distances from the intersection points to the central point of the endoscope image, and selecting the intersection point with the shorter distance to the central point as the anal orifice position.
In a possible implementation manner of the present application, the first determining unit 603 is specifically configured to:
performing weighted fitting on the position characteristic, the epithelial attribute characteristic, the vein attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image to obtain target type object parameters;
and determining a data set of the target type object included in the endoscopic image based on the target type object parameter and a preset target type object parameter threshold.
In a possible implementation manner of the present application, the first identifying unit 601 is specifically configured to:
and identifying the target abnormal object in the endoscope image acquired in advance based on a target abnormal object identification model trained in advance to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame.
The surgical ligation auxiliary device provided by the embodiment of the application identifies, through the first identification unit 601, a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, where the endoscope image is an endoscope image shot when a patient undergoes anal endoscopy; the first obtaining unit 602 obtains the position feature, epithelial attribute feature, vein attribute feature, and lymph node attribute feature of each target pixel point in the endoscope image, where the target pixel points are pixel points in a target area and the target area comprises the area except the detection frame; the first determining unit 603 determines a data set of the target type objects included in the endoscope image based on the position feature, epithelial attribute feature, vein attribute feature, and lymph node attribute feature of each target pixel point; and the second determining unit 604 determines an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
Compared with traditional devices, which cannot effectively provide intelligent assistance for surgical ligation of this type, the device automatically identifies the target abnormal object and uses the position information of its corresponding detection frame, together with the data set of target type objects identified in the endoscope image, to intelligently give an auxiliary strategy for performing surgical ligation on the target abnormal object, improving the intelligence of surgical ligation assistance and reducing the misdiagnosis rate and subsequent complications of patients.
In addition to the method and device for assisting surgical ligation described above, embodiments of the present application further provide a computer device that integrates any one of the surgical ligation assisting devices provided in embodiments of the present application, the computer device including:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to perform the operations of any of the surgical ligation assistance methods described in the embodiments above.
Embodiments of the present application also provide a computer device, which integrates any one of the surgical ligation auxiliary devices provided in embodiments of the present application. Fig. 7 is a schematic diagram showing a structure of a computer device according to an embodiment of the present application, specifically:
the computer device may include components such as a processor 701 of one or more processing cores, a storage unit 702 of one or more computer-readable storage media, a power supply 703, and an input unit 704. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 7 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 701 is a control center of the computer apparatus, connects various parts of the entire computer apparatus using various interfaces and lines, and performs various functions of the computer apparatus and processes data by running or executing software programs and/or modules stored in the storage unit 702 and calling data stored in the storage unit 702, thereby performing overall monitoring of the computer apparatus. Optionally, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
The storage unit 702 may be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by operating the software programs and modules stored in the storage unit 702. The storage unit 702 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the storage unit 702 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory unit 702 may further include a memory controller to provide the processor 701 with access to the memory unit 702.
The computer device further includes a power supply 703 for supplying power to the various components, and preferably, the power supply 703 is logically connected to the processor 701 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system. The power supply 703 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 704, the input unit 704 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment of the present application, the processor 701 in the computer device loads an executable file corresponding to a process of one or more application programs into the storage unit 702 according to the following instructions, and the processor 701 runs the application programs stored in the storage unit 702, so as to implement various functions as follows:
identifying a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, wherein the endoscope image is an endoscope image shot when a patient is subjected to anal endoscopy; acquiring position characteristics, epithelial attribute characteristics, vein attribute characteristics and lymph node attribute characteristics of each target pixel point in the endoscope image, wherein the target pixel points are pixel points in a target area, and the target area comprises an area except a detection frame; determining a data set of a target type object included in the endoscopic image based on the position characteristic, the epithelial attribute characteristic, the venous attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscopic image; determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
The application provides a surgical ligation auxiliary method. Compared with traditional methods, which cannot effectively provide intelligent assistance for surgical ligation of this type, the application automatically identifies the target abnormal object and uses the position information of its corresponding detection frame, together with the data set of target type objects identified in the endoscope image, to intelligently give an auxiliary strategy for performing surgical ligation on the target abnormal object, improving the intelligence of surgical ligation assistance and reducing the misdiagnosis rate and subsequent complications of patients.
To this end, an embodiment of the present application provides a computer-readable storage medium, which may include: Read-Only Memory (ROM), Random Access Memory (RAM), a magnetic disk or an optical disk, and the like. The computer-readable storage medium stores a plurality of instructions that can be loaded by the processor to perform the steps of any of the surgical ligation assistance methods provided by the embodiments of the present application. For example, the instructions may perform the steps of:
identifying a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, wherein the endoscope image is an endoscope image shot when a patient is subjected to anal endoscopy; acquiring position characteristics, epithelial attribute characteristics, vein attribute characteristics and lymph node attribute characteristics of each target pixel point in the endoscope image, wherein the target pixel points are pixel points in a target area, and the target area comprises an area except a detection frame; determining a data set of a target type object included in the endoscopic image based on the position characteristic, the epithelial attribute characteristic, the venous attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscopic image; determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The surgical ligation auxiliary method, the surgical ligation auxiliary device and the related equipment provided by the embodiment of the application are described in detail, the principle and the implementation mode of the application are explained by applying specific examples, and the description of the embodiment is only used for helping to understand the method and the core idea of the application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A surgical ligation assistance method, the method comprising:
identifying a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, wherein the endoscope image is an endoscope image shot when a patient is subjected to anal endoscopy;
acquiring position characteristics, epithelial attribute characteristics, vein attribute characteristics and lymph node attribute characteristics of each target pixel point in the endoscope image, wherein the target pixel points are pixel points in a target area, and the target area comprises an area except a detection frame;
determining a data set of a target type object included in the endoscopic image based on the position characteristic, the epithelial attribute characteristic, the venous attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscopic image;
determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
2. The method for assisting surgical ligation according to claim 1, wherein the determining an assisting strategy for surgical ligation of the target abnormal object based on the position information of the detection frame and the data set of the target type object comprises:
if the data set of the target type object comprises only the anal canal side of the dentate line, determining that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt the doctor that surgical ligation is not advised;
if the data set of the target type object comprises only the rectum side of the dentate line, determining that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt the doctor that normal surgical ligation can be performed;
if the data set of the target type object comprises both the anal canal side and the rectum side of the dentate line, acquiring a boundary line between the anal canal side and the rectum side of the dentate line;
and determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the boundary line.
3. The method for assisting surgical ligation according to claim 2, wherein the determining an assisting strategy for surgical ligation of the target abnormal object based on the position information of the detection frame and the boundary line includes:
performing image transformation processing on the boundary line to obtain a surgical early-warning area with a target shape;
determining the positional relationship between the target abnormal object and the surgical early-warning area based on the position information of the detection frame and the surgical early-warning area;
if the positional relationship is that the target abnormal object intersects the surgical early-warning area, determining that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt the doctor that surgical ligation is not advised;
if the positional relationship is that the target abnormal object is separate from the surgical early-warning area, determining that the auxiliary strategy for performing surgical ligation on the target abnormal object is to prompt the doctor that normal surgical ligation can be performed.
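The boundary-line transformation and positional test of this claim can be illustrated as follows. Padding the bounding box of the boundary polyline into a rectangular band is only one plausible "image transformation" — the disclosure does not fix the exact shape — and the function names, margin value and relation labels are assumptions.

```python
# Hedged sketch: build an early-warning area from the dentate-line
# boundary and classify the detection frame against it.

def warning_area_from_boundary(boundary_pts, margin=20):
    """Expand the boundary polyline into a rectangular early-warning
    area by padding its bounding box with `margin` pixels."""
    xs = [p[0] for p in boundary_pts]
    ys = [p[1] for p in boundary_pts]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def relation(box, area):
    """Classify two axis-aligned rectangles (x1, y1, x2, y2) as
    "intersect" or "separate"."""
    ax1, ay1, ax2, ay2 = box
    bx1, by1, bx2, by2 = area
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "separate"
    return "intersect"
```

An "intersect" result would map to the "ligation not advised" prompt, and "separate" to the normal-ligation prompt.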
4. The method for assisting surgical ligation according to claim 1, wherein the obtaining of the position characteristics of each target pixel point in the endoscopic image comprises:
acquiring a scope body contour in the endoscope image;
acquiring a central line of the scope body contour and two intersection points of the central line with the scope body contour;
determining the anal orifice position based on the distance from each intersection point to the central point of the endoscope image;
and determining the position characteristic of each target pixel point based on the Euclidean distance from each target pixel point in the endoscope image to the anal orifice position.
6. The surgical ligation assistance method according to claim 4, wherein the determining the anal orifice position based on the distance from each intersection point to the central point of the endoscope image comprises:
respectively acquiring the distance from each intersection point to the central point of the endoscope image;
and comparing the distances from the intersection points to the central point of the endoscope image, and selecting the intersection point with the smaller distance to the central point of the endoscope image as the anal orifice position.
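Claims 4 and 5 reduce to a distance comparison followed by a per-pixel Euclidean distance. A minimal sketch, with point layouts and function names as assumptions:

```python
import math

def anal_orifice_position(intersections, image_center):
    """Of the two centre-line/contour intersection points, pick the
    one closer to the image centre as the anal orifice position."""
    return min(intersections, key=lambda p: math.dist(p, image_center))

def position_feature(pixel, orifice):
    """Position characteristic of a target pixel point: its Euclidean
    distance to the anal orifice position."""
    return math.dist(pixel, orifice)
```
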
6. The method for assisting surgical ligation according to claim 1, wherein the determining the data set of the target type object included in the endoscope image based on the position characteristic, the epithelial attribute characteristic, the vein attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image comprises:
performing weighted fitting on the position characteristic, the epithelial attribute characteristic, the vein attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image to obtain target type object parameters;
and determining a data set of the target type object included in the endoscope image based on the target type object parameter and a preset target type object parameter threshold.
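The weighted fitting and threshold comparison of this claim might look like the following. The weights, threshold value and label names are illustrative assumptions; the disclosure does not publish concrete values.

```python
# Hedged sketch of the per-pixel weighted fitting and thresholding.

def target_type_parameter(pos, epi, vein, lymph,
                          weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted fit of the four per-pixel characteristics into a
    single target type object parameter (weights assumed)."""
    w1, w2, w3, w4 = weights
    return w1 * pos + w2 * epi + w3 * vein + w4 * lymph

def classify_pixel(param, threshold=0.5):
    """Compare the parameter against a preset threshold to assign the
    pixel to one side of the dentate line (labels assumed)."""
    return "rectum" if param >= threshold else "anal_canal"
```
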
7. The surgical ligation assistance method according to claim 1, wherein the identifying a target abnormal object in a pre-acquired endoscope image to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame comprises:
and identifying the target abnormal object in the endoscope image acquired in advance based on a target abnormal object identification model trained in advance to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame.
8. A surgical ligation assistance device, the device comprising:
the first identification unit is used for identifying a target abnormal object in an endoscope image acquired in advance to obtain a detection frame for marking the position of the target abnormal object and position information of the detection frame, wherein the endoscope image is an endoscope image shot when a patient is subjected to anal endoscopy;
the endoscope image acquisition device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring the position characteristics, the epithelial attribute characteristics, the vein attribute characteristics and the lymph node attribute characteristics of each target pixel point in the endoscope image, the target pixel points are pixel points in a target area, and the target area comprises an area except a detection frame;
the first determining unit is used for determining a data set of a target type object included in the endoscope image based on the position characteristic, the epithelial attribute characteristic, the vein attribute characteristic and the lymph node attribute characteristic of each target pixel point in the endoscope image;
and the second determination unit is used for determining an auxiliary strategy for performing surgical ligation on the target abnormal object based on the position information of the detection frame and the data set of the target type object.
9. A computer device, characterized in that the computer device comprises:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to implement the surgical ligation assistance method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which is loaded by a processor to perform the steps of the surgical ligation assistance method according to any one of claims 1 to 7.
CN202211431164.0A 2022-11-14 2022-11-14 Surgical ligation auxiliary method, device and related equipment Pending CN115731175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211431164.0A CN115731175A (en) 2022-11-14 2022-11-14 Surgical ligation auxiliary method, device and related equipment


Publications (1)

Publication Number Publication Date
CN115731175A 2023-03-03

Family

ID=85295966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211431164.0A Pending CN115731175A (en) 2022-11-14 2022-11-14 Surgical ligation auxiliary method, device and related equipment

Country Status (1)

Country Link
CN (1) CN115731175A (en)

Similar Documents

Publication Publication Date Title
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
US20200260944A1 (en) Method and device for recognizing macular region, and computer-readable storage medium
CN109002846B (en) Image recognition method, device and storage medium
WO2019037676A1 (en) Image processing method and device
WO2021184600A1 (en) Image segmentation method, apparatus and device, and computer-readable storage medium
US11783488B2 (en) Method and device of extracting label in medical image
CN113724243B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115393356B (en) Target part abnormal form recognition method and device and computer readable storage medium
CN114627067A (en) Wound area measurement and auxiliary diagnosis and treatment method based on image processing
CN114417037B (en) Image processing method, device, terminal and readable storage medium
CN114387320B (en) Medical image registration method, device, terminal and computer-readable storage medium
CN113344926B (en) Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image
CN115937209A (en) Method and device for identifying image abnormality of nasopharyngoscope
CN111815610A (en) Lesion focus detection method and device of lesion image
TW202044271A (en) Method and system for analyzing skin texture and skin lesion using artificial intelligence cloud based platform
CN114419050B (en) Gastric mucosa visualization degree quantification method and device, terminal and readable storage medium
CN115731175A (en) Surgical ligation auxiliary method, device and related equipment
CN114511558B (en) Method and device for detecting cleanliness of intestinal tract
US20230196568A1 (en) Angiography image determination method and angiography image determination device
CN114419041B (en) Method and device for identifying focus color
CN113706536B (en) Sliding mirror risk early warning method and device and computer readable storage medium
CN115778546B (en) Intelligent auxiliary method and device for endoscopic submucosal dissection and related equipment
CN115393230B (en) Ultrasonic endoscope image standardization method and device and related device thereof
CN115690060A (en) Method and device for determining abnormality of eye ultrasonic image and related equipment
CN114511045B (en) Image processing method, device, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination