CN116993805A - Intraoperative residual organ volume estimation system oriented to operation planning assistance

Intraoperative residual organ volume estimation system oriented to operation planning assistance

Info

Publication number
CN116993805A
CN116993805A (Application No. CN202310419428.9A)
Authority
CN
China
Prior art keywords
tissue
grid model
preoperative
vertex
model
Prior art date
Legal status
Pending
Application number
CN202310419428.9A
Other languages
Chinese (zh)
Inventor
杨善林
李霄剑
郑杰禹
李玲
莫杭杰
欧阳波
王昕�
吴昊均
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202310419428.9A
Publication of CN116993805A
Legal status: Pending


Classifications

    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06N3/0895 Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G06N3/09 Supervised learning
    • G06N3/0985 Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T7/593 Depth or shape recovery from multiple images, from stereo images
    • G06V10/40 Extraction of image or video features
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10068 Endoscopic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G06T2210/41 Medical

Abstract

The invention provides an intraoperative residual organ volume estimation system oriented to surgical planning assistance, relating to the field of minimally invasive surgery. Through the matching relation between the preoperative tissue grid model and the intraoperative tissue grid model, the measurement of the residual organ volume is converted into a volume measurement on the corresponding region of the preoperative tissue grid model, which avoids the interference of complex tissue deformation and invisible regions with the volume prediction. Interaction with the doctor during the operation is also considered: simple annotations by the doctor on the region of interest are received and accurate volume measurement information is obtained, providing active selectivity and a highly usable intraoperative reference. In addition, the introduced binocular-endoscope-based online self-supervised learning depth estimation method uses a binocular depth estimation network with the capability of rapid over-fitting, which can continuously adapt to new scenes by exploiting self-supervision information, thereby ensuring the accuracy of the intraoperative tissue grid model.

Description

Intraoperative residual organ volume estimation system oriented to operation planning assistance
Technical Field
The invention relates to the field of minimally invasive surgery, in particular to an intraoperative residual organ volume estimation system for surgery planning assistance.
Background
Compared with traditional open surgery, minimally invasive surgery (such as endoscopic surgery) has the advantages of smaller wounds, less bleeding and faster recovery, and has therefore been increasingly widely adopted. There are numerous evaluation indexes for minimally invasive surgery, including operation time, intraoperative blood loss, postoperative complication rate, recovery time and the like. For organ resection in particular, the remaining organ volume is one of the important evaluation criteria.
Depending on the type of organ and the actual situation, there are currently various methods for measuring the residual volume of an organ. Some common technical schemes are as follows:
(1) Pre- or post-operative CT or MRI scan: by performing three-dimensional CT or MRI scanning, the volume of organs such as the liver and lung can be calculated, so that the volume and proportion of the residual organ before and after the operation can be estimated. However, this method cannot accurately distinguish the organ from the surrounding tissue, i.e. the boundary of the measured organ may be blurred, resulting in inaccurate measurement results.
(2) Intraoperative ultrasound examination: ultrasonic examination is a noninvasive examination method, and can evaluate the size and shape of organs such as liver, gall bladder, pancreas and the like, thereby indirectly calculating the residual volume. However, firstly, the method requires a certain technical level and experience of doctors, and the operation level and experience of different doctors can influence the accuracy of the measurement result; secondly, the measurement of the organ is affected by factors such as depth, angle, distance, scanning plane and the like, and errors are easy to occur; finally, there are limitations to the measurement of some organs, such as deep organs like heart and lung, which are blocked by ribs, and it is difficult to obtain accurate volume data.
(3) Direct post-operative measurement: in surgery or dissection, the organ is directly taken out and soaked in liquid, and the displacement of the liquid is measured to estimate the volume of the organ. However, the method can only be used for the organ excised after operation, can not be applied to the volume measurement of living organs, and can not give prompt and guidance to doctors in operation.
In view of this, there is a need to provide an intraoperative residual organ volume estimation system with more accurate measurement results.
Disclosure of Invention
(I) technical problems to be solved
Aiming at the defects of the prior art, the invention provides an intraoperative residual organ volume estimation system oriented to the assistance of surgical planning, which solves the technical problem of inaccurate intraoperative measurement results.
(II) technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme:
an intraoperative residual organ volume estimation system for surgical planning assistance, comprising:
the registration module is used for registering the preoperative tissue grid model and the intraoperative tissue grid model to obtain an integral tissue grid model which displays the internal tissue information of the preoperative tissue grid model in the intraoperative tissue grid model;
the tissue grid model in the operation is obtained according to the depth value of the appointed binocular endoscope image frame;
The first acquisition module is used for receiving a region to be excised marked by a doctor on the region of interest of the designated binocular endoscope image frame and acquiring the corresponding vertex set of all pixel points in the region to be excised in the whole tissue grid model;
the second acquisition module is used for acquiring and visualizing the corresponding vertex set on the preoperative tissue grid model according to the corresponding vertex set in the whole tissue grid model;
the solving module is used for receiving the cutting direction and the manual candidate region marked on the preoperative tissue grid model by a doctor, acquiring the local volume of the organ corresponding to the region to be excised by combining the corresponding vertex set on the preoperative tissue grid model, and finally acquiring the volume or the volume percentage of the residual organ.
Preferably, the registration module includes:
the first modeling unit is used for acquiring a preoperative organization grid model with organization semantic information;
the second modeling unit is used for acquiring an intraoperative tissue grid model according to the depth value of the appointed binocular endoscope image frame;
the feature extraction unit is used for respectively acquiring corresponding multi-level features according to the preoperative tissue grid model and the intraoperative tissue grid model;
The overlap prediction unit is used for acquiring an overlap region of the preoperative tissue grid model and the intraoperative tissue grid model according to the multi-level characteristics, and acquiring a pose transformation relationship of the vertex of the preoperative tissue grid model in the overlap region;
the global fusion unit is used for acquiring all vertex coordinates of the preoperative tissue grid model after registration according to the transformation relation between the coordinates and the pose of the vertices of the preoperative tissue grid model in the overlapping area and the coordinates of the vertices of the preoperative tissue grid model in the non-overlapping area;
and the information display unit is used for acquiring an integral tissue grid model which displays the internal tissue information of the preoperative tissue grid model in the operative tissue grid model according to all the vertex coordinates of the preoperative tissue grid model after registration.
Preferably, the feature extraction unit adopts Chebyshev spectral graph convolution to extract multi-level features of the preoperative tissue grid model and the intraoperative tissue grid model:
F_pre^(n+1) = Σ_{b=0}^{B} T_b(L̃_pre) · F_pre^(n) · θ_b, F_in^(n+1) = Σ_{b=0}^{B} T_b(L̃_in) · F_in^(n) · θ_b
wherein the preoperative tissue grid model is defined as M_pre = (V_pre, E_pre), V_pre representing the three-dimensional coordinates of the vertices of the preoperative tissue grid model and E_pre representing the edges between those vertices; the intraoperative tissue grid model is M_in = (V_in, E_in), V_in representing the three-dimensional coordinates of the vertices of the intraoperative tissue grid model and E_in representing the edges between those vertices;
F_pre^(n+1) and F_pre^(n) respectively represent the downsampled scale features of the (n+1)-th and n-th layers of the preoperative tissue model, with F_pre^(0) initialized to V_pre; F_in^(n+1) and F_in^(n) respectively represent the features of the (n+1)-th and n-th layers of the intraoperative tissue model, with F_in^(0) initialized to V_in;
T_b(L̃_pre) and T_b(L̃_in) are the B-order Chebyshev polynomials calculated from the respective vertices and their B-ring neighbors, L̃_in and L̃_pre are the scaled Laplacian matrices calculated from the edges E_in and E_pre respectively, and θ_b are learning parameters of the neural network;
and/or the overlap prediction unit is specifically configured to:
acquire the overlapping region of the preoperative tissue grid model and the intraoperative tissue grid model by an attention mechanism, wherein the masks O_pre and O_in of the overlapping regions are predicted from the m-th-level downsampled scale features F_pre^(m) and F_in^(m) of the vertices of the two models through self-attention (self) and cross-attention (cross) operations; O_pre represents the mask of the overlapping region of the preoperative tissue grid model M_pre, and O_in represents the mask of the overlapping region of the intraoperative tissue grid model M_in;
according to the masks O_pre and O_in, acquire the vertices v_pre,i^o and v_in,j^o lying in the overlapping region together with their features, and use a multi-layer perceptron MLP to calculate the point corresponding to each vertex v_pre,i^o of the preoperative tissue grid model M_pre: the corresponding point ṽ_in,i on the intraoperative tissue grid model M_in is obtained as a cosine-similarity-weighted combination of the position-encoded overlap vertices of the intraoperative tissue grid model;
establish the local neighborhood of each corresponding point ṽ_in,i using nearest neighbor search KNN, and solve the rotation matrix by singular value decomposition SVD of the cross-covariance of the paired neighborhood points, wherein R̃_pre,i represents the rotation matrix of vertex v_pre,i^o, the neighborhood points v_pre,j^o are vertices of the preoperative tissue grid model, and ṽ_in,j are the corresponding vertices of the intraoperative tissue grid model;
use the rotation matrix R̃_pre,i to transform the point cloud coordinates and predict the displacement vector of vertex v_pre,i^o:
t̃_pre,i = ṽ_in,i − R̃_pre,i · v_pre,i^o
wherein t̃_pre,i is the displacement vector of the vertex of the preoperative tissue grid model in the overlapping region and, together with the rotation matrix R̃_pre,i, forms the pose transformation relation;
and/or the global fusion unit is specifically configured to:
use an MLP to regress the rotation matrix and displacement vector of all vertices of the preoperative tissue grid model from the pose transformations of the overlap-region vertices, wherein R_pre and t_pre respectively represent the rotation matrix and displacement vector of all vertices of the preoperative tissue grid model, and w_i represents the distance-based weight between the overlap-region vertex v_pre,i^o and all vertices v_pre of the preoperative tissue grid model;
V̂_pre = R_pre · V_pre + t_pre
wherein V̂_pre represents all vertex coordinates of the preoperative tissue grid model after registration.
Preferably, during a training phase of the intraoperative residual organ volume estimation system, a training set is generated based on real data:
according to the feature point pairs between the designated binocular endoscope image frame and the preoperative tissue grid model, the preoperative tissue grid model and the intraoperative tissue grid model are registered by a feature-point-based non-rigid algorithm, wherein for any feature point the overall transfer matrix T_G of the preoperative tissue grid model and the local deformation transfer matrix T_l,a of the feature point are solved with the non-rigid registration algorithm ICP (Non-rigid ICP), so that T_G · T_l,a maps the a-th feature point v_pre,a of the preoperative tissue grid model used for non-rigid registration onto its corresponding feature point v_in,a of the intraoperative tissue grid model;
the local deformation transfer matrices T_l of all vertices in the preoperative tissue grid model are then obtained by quaternion interpolation, and the registered coordinate labels v_pre^gt of the vertices v_pre of the preoperative tissue grid model are obtained through this transformation relation.
Preferably, during the training phase of the intraoperative residual organ volume estimation system, a supervised loss function Loss_s is constructed as a weighted combination of an l2 true-value loss and Cauchy-Green invariant terms:
wherein Loss_s represents the supervised loss function of the training phase;
β_s and γ_s respectively represent the supervised loss term coefficients;
N_1 represents the number of vertices of the preoperative tissue grid model M_pre;
the l2 true-value loss, based on the manually annotated data set, is computed between the registered vertex coordinates V̂_pre of the preoperative tissue grid model and their labels;
I_c + II_c + III_c represents the Cauchy-Green invariants used to constrain the degree of in-vivo tissue deformation: I_c constrains the arc distance between two surface points to remain unchanged, II_c constrains the tissue surface area to remain unchanged, and III_c constrains the tissue volume to remain unchanged.
Preferably, the registration module further comprises:
a precision fine-tuning unit, which introduces an unsupervised loss to fine-tune the network and assists the global fusion unit in acquiring all registered vertex coordinates of the preoperative tissue grid model;
and/or the unsupervised-loss fine-tuning network constructs, during application, an unsupervised loss function Loss_u combining a bidirectional nearest-neighbor term and Cauchy-Green invariant terms:
wherein Loss_u represents the unsupervised loss function;
β_u and γ_u respectively represent the unsupervised loss term coefficients; V̂_pre denotes the registered vertex coordinates of the preoperative tissue grid model during unsupervised training; for each registered preoperative vertex v̂_pre,a, its nearest point in the intraoperative tissue grid model is found and the Euclidean distance between them is taken; likewise, for each vertex v_in,b of the intraoperative tissue grid model, its nearest point in the registered preoperative tissue grid model is found and the Euclidean distance between v_in,b and that nearest point is taken;
N_1 represents the number of vertices of the preoperative tissue grid model M_pre, and N_2 represents the number of vertices of the intraoperative tissue grid model M_in;
the Cauchy-Green invariants are used as unsupervised regularization terms: the first constrains the arc distance between two surface points to remain unchanged, the second constrains the tissue surface area to remain unchanged, and the third constrains the tissue volume to remain unchanged.
Preferably, it is defined that the vertex set of all pixel points of the region to be excised in the intraoperative tissue grid model M_in is P_s; the vertex set corresponding to all pixel points of the region to be excised in the whole tissue grid model M_trans is P_trans, any vertex of which is p_trans; and the corresponding vertex set on the preoperative tissue grid model M_pre is P_ct, any vertex of which is p_ct;
the first acquisition module is specifically configured to acquire, according to P_s and using a nearest neighbor algorithm, any vertex p_trans of P_trans on M_trans;
and/or the second acquisition module is configured to acquire and visualize, according to R_pre, t_pre and p_trans, any vertex p_ct of P_ct on M_pre:
p_ct = R_pre · p_trans + t_pre
Preferably, the solving module is specifically configured to:
traverse the vertices p_ct of P_ct and, along the cutting direction v_pro, acquire the several vertices of the manual candidate region nearest to that direction; the corresponding vertex p'_ct is obtained by averaging these vertices, forming the corresponding vertex set P'_ct;
according to the vertex sets P_ct and P'_ct, pair the spatial points p_ct and p'_ct and generate a plurality of irregular pentahedra in the region S to be excised;
calculate the volume of each irregular pentahedron and sum the volumes to obtain the local volume V_cut of the organ corresponding to the region to be excised; combined with the whole preoperative organ volume V_all, the volume of the remaining organ V_remain = V_all − V_cut, or the volume percentage of the remaining organ V_remain / V_all × 100%, is finally obtained.
Wherein, the volume of any pentahedron A_1A_2A_3B_1B_2B_3 is calculated as follows:
draw auxiliary lines B_1'A_2 and A_2B_3' parallel to B_1B_2 and B_2B_3 respectively, dividing the irregular pentahedron into a triangular prism B_1B_2B_3-B_1'A_2B_3' and a rectangular pyramid A_1A_3B_3'B_1'-A_2;
calculate the area of plane B_1B_2B_3 and the perpendicular distance h_prism from A_2 to plane B_1B_2B_3, obtaining the volume of the triangular prism B_1B_2B_3-B_1'A_2B_3':
V_prism = S_ΔB1B2B3 × h_prism
calculate the area of plane A_1A_3B_3'B_1' and the perpendicular distance h_pyramid from A_2 to plane A_1A_3B_3'B_1', obtaining the volume of the rectangular pyramid A_1A_3B_3'B_1'-A_2:
V_pyramid = (1/3) × S_A1A3B3'B1' × h_pyramid
the volume V_final of each pentahedron is:
V_final = V_prism + V_pyramid
Preferably, the second modeling unit acquires the depth value of the designated binocular endoscope image frame by an online self-supervised learning depth estimation method based on the binocular endoscope; the binocular depth estimation network used by this method has the capability of rapid over-fitting and can continuously adapt to new scenes by exploiting self-supervision information;
In the real-time reconstruction mode, the second modeling unit is specifically configured to perform fitting on the continuous video frames to obtain depth values of the designated binocular endoscope image frames, and includes:
the extraction subunit is used for acquiring binocular endoscope images, and extracting multi-scale features of the current frame image by adopting an encoder network of the current binocular depth estimation network;
the fusion subunit is used for fusing the multi-scale features by adopting a decoder network of the current binocular depth estimation network to acquire the parallax of each pixel point in the current frame image;
the conversion subunit is used for converting parallax into depth according to the internal and external parameters of the camera and outputting the depth as a result of the current frame image;
and the first estimation subunit is used for updating parameters of the current binocular depth estimation network by using self-supervision loss under the condition of not introducing an external true value, and is used for depth estimation of the next frame of image.
Preferably, in the accurate measurement mode, the second modeling unit is specifically configured to perform fitting on the key image video frame, including:
and the second estimation subunit is used for, without introducing an external true value, starting from the binocular depth estimation network obtained in the real-time reconstruction mode for the frame immediately preceding the designated binocular endoscope image frame, updating the parameters of the binocular depth estimation network until convergence by using the self-supervision loss corresponding to the designated binocular endoscope image frame, and using the converged binocular depth estimation network for accurate depth estimation of the designated binocular endoscope image frame to acquire its depth value.
(III) beneficial effects
The invention provides an intraoperative residual organ volume estimation system oriented to surgical planning assistance. Compared with the prior art, the method has the following beneficial effects:
According to the invention, through the matching relation between the preoperative tissue grid model and the intraoperative tissue grid model, the measurement of the residual organ volume is converted into a volume measurement on the corresponding region of the preoperative tissue grid model, which avoids the interference of complex tissue deformation and invisible regions with the volume prediction; in addition, interaction with the doctor during the operation is considered: simple annotations by the doctor on the region of interest are received and accurate volume measurement information is obtained, providing active selectivity and a highly usable intraoperative reference.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a system for estimating a volume of a remaining organ during surgery for assistance in planning a surgery according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an organ volume corresponding to a region to be resected according to an embodiment of the present application;
FIG. 3 is a schematic diagram of volume calculation of an irregular pentahedron according to an embodiment of the present application;
fig. 4 is a schematic diagram of a technical framework of an online self-supervised learning depth estimation method based on a binocular endoscope according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application are clearly and completely described, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application solves the technical problem of inaccurate measurement results in operation by providing the intraoperative residual organ volume estimation system oriented to the assistance of operation planning.
The technical scheme in the embodiment of the application aims to solve the technical problems, and the overall thought is as follows:
The embodiment of the application is mainly applied to, but not limited to, surgical endoscope scenes such as laparoscopic surgery. As described in the background art, existing schemes either can only be verified after the operation and cannot measure the volume of the organ to be resected in real time during the operation so as to prompt and guide the doctor, or rely on expensive equipment and high-level intraoperative skills, and are therefore not suitable for large-scale popularization. The embodiment of the application provides an intraoperative residual organ volume estimation system oriented to surgical planning assistance that is based on a fast, non-invasive and low-cost algorithm.
In the field of view of endoscopic surgery, the doctor can only see the surface of the tissue, and information such as the position of blood vessels and focal areas inside the tissue depends on the doctor's experience. The CT/MRI preoperative reconstruction model carries information on the blood vessels and focal areas inside the tissue; the non-rigid registration fusion algorithm can register the preoperative tissue grid model into the intraoperative tissue grid model and present the internal tissue information to the doctor by means of conventional display technology, thereby assisting clinical decisions, reducing surgical risk and improving surgical efficiency. The method does not depend on extra equipment: through the matching relation between the preoperative tissue grid model and the intraoperative tissue grid model, the measurement of the residual organ volume is converted into a volume measurement on the corresponding region of the preoperative tissue grid model, which avoids the interference of complex tissue deformation and invisible regions with the volume prediction.
And through simple interaction with doctors in the operation, the volume of the organ to be cut can be measured in time, important information such as the organ resection range, the functional area reservation and the like can be determined, the doctor can better adjust the operation scheme, evaluate the operation risk and forecast the operation effect, and meanwhile, the safety and the effectiveness of the operation can be guaranteed.
In addition, an intra-operative tissue mesh model may be acquired from depth values of the designated binocular endoscopic image frames. The depth value of the appointed binocular endoscope image frame can be obtained by adopting an online self-supervision learning depth estimation method based on the binocular endoscope; the binocular depth estimation network used by the online self-supervision learning depth estimation method has the capability of fast overlearning, and can continuously adapt to new scenes by utilizing self-supervision information. The on-line self-supervision learning depth estimation method also provides two modes, namely a real-time reconstruction mode and an accurate measurement mode, for determining the depth value of the appointed binocular endoscope image frame.
The depth estimation of the dual-mode switching can provide real-time point cloud of an anatomical structure in operation, assist a doctor to intuitively understand a three-dimensional structure in operation, and can realize high-precision reconstruction of binocular endoscope image frames appointed by the doctor based on single-frame overfitting, so that a foundation is provided for subsequent processing, and the speed and the precision are considered in application.
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
Examples:
As shown in FIG. 1, an embodiment of the present invention provides an intraoperative residual organ volume estimation system for surgical planning assistance, comprising:
the registration module is used for registering the preoperative tissue grid model and the intraoperative tissue grid model to obtain an integral tissue grid model which displays the internal tissue information of the preoperative tissue grid model in the intraoperative tissue grid model;
the tissue grid model in the operation is obtained according to the depth value of the appointed binocular endoscope image frame;
the first acquisition module is used for receiving a region to be excised marked by a doctor on the region of interest of the designated binocular endoscope image frame, and acquiring the corresponding vertex set of all pixel points in the region to be excised in the whole tissue grid model;
the second acquisition module is used for acquiring and visualizing the corresponding vertex set on the preoperative tissue grid model according to the corresponding vertex set in the whole tissue grid model;
the solving module is used for receiving the cutting direction and the manual candidate region marked on the preoperative tissue grid model by a doctor, acquiring the local volume of the organ corresponding to the region to be excised by combining the corresponding vertex set on the preoperative tissue grid model, and finally acquiring the volume or the volume percentage of the residual organ.
According to the embodiment of the invention, extra equipment is not relied on, and the measurement of the volume of the residual organ is converted into the volume measurement of the corresponding area of the preoperative tissue grid model through the matching relation between the preoperative tissue grid model and the intraoperative tissue grid model, so that the interference of complex deformation of tissues and invisible areas on volume prediction is avoided; and in addition, interaction with a doctor in the operation is considered, simple labels of the doctor aiming at the region of interest are received, accurate volume measurement information is obtained, and active selectivity and high reference in the operation are realized.
The following will describe each component module of the above technical solution in detail:
and the registration module is used for registering the preoperative tissue grid model and the intraoperative tissue grid model, and acquiring an integral tissue grid model which displays the internal tissue information of the preoperative tissue grid model in the intraoperative tissue grid model.
The registration module comprises a first modeling unit, a second modeling unit, a feature extraction unit, an overlap prediction unit, a global fusion unit and a precision fine adjustment unit. Specific:
for the first modeling unit, it is used for obtaining the preoperative tissue grid model with tissue semantic information.
This unit reconstructs the CT/MRI tissue with 3D Slicer software to obtain a three-dimensional grid model, and then uses the DeepLab deep learning algorithm or manual segmentation to segment tissues such as blood vessels and the liver, finally forming the preoperative tissue grid model M_pre = (V_pre, E_pre) with tissue semantic information, wherein V_pre represents the three-dimensional coordinates of the model vertices and E_pre represents the edges between vertices.
For the second modeling unit, it is used for obtaining the intraoperative tissue grid model according to the depth value of the appointed binocular endoscope image frame.
Illustratively, this unit employs the binocular-endoscope-based online self-supervised learning depth estimation (see below) to estimate the depth value D of each pixel point, and calculates the spatial coordinates of the pixel point in the camera coordinate system through the pinhole camera model, with the formula:
x = (u − c_x) · D / f_x, y = (v − c_y) · D / f_y, z = D
wherein D is the depth estimation value of the pixel point; (u, v) are its pixel coordinates; x, y and z respectively represent the x, y and z coordinates in the camera coordinate system;
c_x, c_y, f_x, f_y are the corresponding parameters of the intrinsic matrix of the left-eye or right-eye camera of the binocular endoscope. The pixels of the image are thus converted into the point cloud V_in = {v_in,a | a = 1, 2, …, N_2}, wherein v_in,a represents the spatial coordinates of the a-th pixel point;
finally, Delaunay triangulation is used to generate the adjacency edges E_in of the point cloud V_in, finally forming the intraoperative tissue grid model M_in = (V_in, E_in).
And the feature extraction unit is used for respectively acquiring corresponding multi-level features according to the preoperative tissue grid model and the intraoperative tissue grid model.
Specifically, the feature extraction unit adopts Chebyshev spectral graph convolution to extract multi-level features of the preoperative tissue grid model and the intraoperative tissue grid model:
F_pre^(n+1) = Σ_{b=0}^{B} T_b(L̃_pre) · F_pre^(n) · θ_b, F_in^(n+1) = Σ_{b=0}^{B} T_b(L̃_in) · F_in^(n) · θ_b
wherein the preoperative tissue grid model is defined as M_pre = (V_pre, E_pre), V_pre representing the three-dimensional coordinates of the vertices of the preoperative tissue grid model and E_pre representing the edges between those vertices; the intraoperative tissue grid model is M_in = (V_in, E_in), V_in representing the three-dimensional coordinates of the vertices of the intraoperative tissue grid model and E_in representing the edges between those vertices;
F_pre^(n+1) and F_pre^(n) respectively represent the downsampled scale features of the (n+1)-th and n-th layers of the preoperative tissue model, with F_pre^(0) initialized to V_pre; F_in^(n+1) and F_in^(n) respectively represent the features of the (n+1)-th and n-th layers of the intraoperative tissue model, with F_in^(0) initialized to V_in;
T_b(L̃_pre) and T_b(L̃_in) are the B-order Chebyshev polynomials calculated from the respective vertices and their B-ring neighbors, L̃_in and L̃_pre are the scaled Laplacian matrices calculated from the edges E_in and E_pre respectively, and θ_b are learning parameters of the neural network.
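For illustration only, a minimal PyTorch-style sketch of one such Chebyshev spectral graph convolution layer is given below; the recursion T_0 = I, T_1 = L̃, T_b = 2·L̃·T_(b-1) − T_(b-2) is the standard Chebyshev definition, while the dense-matrix handling, layer sizes and activation are assumptions rather than the patented network.

```python
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    """One B-order Chebyshev spectral graph convolution:
    F_out = sum_b T_b(L_scaled) @ F_in @ theta_b (assumes order_b >= 1)."""
    def __init__(self, in_dim, out_dim, order_b):
        super().__init__()
        self.order_b = order_b
        self.theta = nn.Parameter(torch.randn(order_b + 1, in_dim, out_dim) * 0.01)

    def forward(self, feats, lap_scaled):
        # feats: (N, in_dim) vertex features; lap_scaled: (N, N) scaled Laplacian
        t_prev, t_curr = feats, lap_scaled @ feats            # T_0 F, T_1 F
        out = t_prev @ self.theta[0] + t_curr @ self.theta[1]
        for b in range(2, self.order_b + 1):
            t_next = 2 * (lap_scaled @ t_curr) - t_prev       # Chebyshev recursion
            out = out + t_next @ self.theta[b]
            t_prev, t_curr = t_curr, t_next
        return torch.relu(out)
```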
And the overlapping prediction unit is used for acquiring the overlapping region of the preoperative tissue grid model and the intraoperative tissue grid model according to the multi-level characteristics and acquiring the pose transformation relation of the vertex of the preoperative tissue grid model in the overlapping region.
Specifically, the overlap prediction unit is configured to:
acquire the overlapping region of the preoperative tissue grid model and the intraoperative tissue grid model by an attention mechanism, wherein the masks O_pre and O_in of the overlapping regions are predicted from the m-th-level downsampled scale features F_pre^(m) and F_in^(m) of the vertices of the two models through self-attention (self) and cross-attention (cross) operations; O_pre represents the mask of the overlapping region of the preoperative tissue grid model M_pre, and O_in represents the mask of the overlapping region of the intraoperative tissue grid model M_in;
according to the masks O_pre and O_in, acquire the vertices v_pre,i^o and v_in,j^o lying in the overlapping region together with their features, and use a multi-layer perceptron MLP to calculate the point corresponding to each vertex v_pre,i^o of the preoperative tissue grid model M_pre: the corresponding point ṽ_in,i on the intraoperative tissue grid model M_in is obtained as a cosine-similarity-weighted combination of the position-encoded overlap vertices of the intraoperative tissue grid model;
establish the local neighborhood of each corresponding point ṽ_in,i using nearest neighbor search KNN, and solve the rotation matrix by singular value decomposition SVD of the cross-covariance of the paired neighborhood points, wherein R̃_pre,i represents the rotation matrix of vertex v_pre,i^o, the neighborhood points v_pre,j^o are vertices of the preoperative tissue grid model, and ṽ_in,j are the corresponding vertices of the intraoperative tissue grid model;
use the rotation matrix R̃_pre,i to transform the point cloud coordinates and predict the displacement vector of vertex v_pre,i^o:
t̃_pre,i = ṽ_in,i − R̃_pre,i · v_pre,i^o
wherein t̃_pre,i is the displacement vector of the vertex of the preoperative tissue grid model in the overlapping region.
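The per-vertex rigid fit described above can be sketched as follows, assuming the corresponding points ṽ_in have already been computed; the Kabsch/SVD solution and the neighborhood size k are conventional choices, not necessarily those of the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_rigid_fit(v_pre_overlap, v_in_corr, k=8):
    """For each overlap vertex, fit a rotation R_i (Kabsch/SVD over its KNN
    neighborhood) and a displacement t_i = corr_i - R_i @ v_i."""
    tree = cKDTree(v_in_corr)
    rotations, displacements = [], []
    for i, v in enumerate(v_pre_overlap):
        _, idx = tree.query(v_in_corr[i], k=k)        # neighborhood around the corresponding point
        p = v_pre_overlap[idx] - v_pre_overlap[idx].mean(0)
        q = v_in_corr[idx] - v_in_corr[idx].mean(0)
        u, _, vt = np.linalg.svd(p.T @ q)             # cross-covariance of the paired points
        d = np.sign(np.linalg.det(vt.T @ u.T))        # keep a proper rotation (det = +1)
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        rotations.append(r)
        displacements.append(v_in_corr[i] - r @ v)    # t_i = corr_i - R_i v_i
    return np.stack(rotations), np.stack(displacements)
```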
And the global fusion unit is used for acquiring all vertex coordinates of the preoperative tissue grid model after registration according to the transformation relation between the coordinates and the pose of the vertices of the preoperative tissue grid model in the overlapping area and the coordinates of the vertices of the preoperative tissue grid model in the non-overlapping area.
Specifically, the global fusion unit is configured to:
use an MLP to regress the rotation matrix and displacement vector of all vertices of the preoperative tissue grid model from the pose transformations of the overlap-region vertices, wherein R_pre and t_pre respectively represent the rotation matrix and displacement vector of all vertices of the preoperative tissue grid model, and w_i represents the distance-based weight between the overlap-region vertex v_pre,i^o and all vertices v_pre of the preoperative tissue grid model, where all vertices include the vertices in the overlapping region and the vertices in the non-overlapping region;
V̂_pre = R_pre · V_pre + t_pre
wherein V̂_pre represents all vertex coordinates of the preoperative tissue grid model after registration.
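A simple distance-weighted blend can illustrate how the overlap-region transforms are propagated to every vertex; here a Gaussian weighting stands in for the MLP regression described above, which is an assumption of this sketch.

```python
import numpy as np

def fuse_to_all_vertices(v_pre_all, v_pre_overlap, rotations, displacements, sigma=10.0):
    """Propagate the overlap-region pose transforms to all preoperative vertices
    with distance-based weights w_i, then apply v_hat = R @ v + t."""
    v_hat = np.empty_like(v_pre_all)
    for a, v in enumerate(v_pre_all):
        d2 = np.sum((v_pre_overlap - v) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        w /= w.sum() + 1e-12                          # weights w_i
        # linear blending of rotations is an approximation; a full implementation
        # would re-orthonormalize or blend quaternions instead
        r = (w[:, None, None] * rotations).sum(0)
        t = (w[:, None] * displacements).sum(0)
        v_hat[a] = r @ v + t
    return v_hat
```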
Accordingly, it can be seen that the embodiment of the invention provides a multimodal fusion network based on grid data: the overlap region and its displacement field are predicted by the overlap prediction unit, and the non-rigid deformation of the preoperative tissue grid model is constrained by the Cauchy-Green invariants, so that the model after multimodal fusion is more reasonable and the multimodal fusion error is reduced.
And the information display unit is used for acquiring an integral tissue grid model which displays the internal tissue information of the preoperative tissue grid model in the operative tissue grid model according to all vertex coordinates of the preoperative tissue grid model after registration.
By way of example, the VR glasses can be adopted in the unit to uniformly display the two registered three-dimensional models in a coordinate system, or the registered preoperative tissue grid model can be superimposed in the endoscope image according to the basic principle of camera imaging, and the two selectable display means can both realize presenting of tissue internal information to doctors, so that the doctors can be assisted in making clinical decisions, and the surgical efficiency is improved while the surgical risk is reduced.
And for the precision fine tuning unit, the precision fine tuning unit is used for introducing an unsupervised loss fine tuning network to assist the global fusion unit to acquire all vertex coordinates of the preoperative tissue grid model after registration.
The precision fine-tuning unit is introduced because, when registering the designated binocular endoscope image frames, differences in endoscope illumination and patient individuality cause the reconstructed intraoperative tissue grid model to differ from the data set; these differences reduce the registration accuracy, which can be improved by the unsupervised-loss fine-tuning network.
The unsupervised-loss fine-tuning network constructs, in the application process, an unsupervised loss function Loss_u combining a bidirectional nearest-neighbor term and Cauchy-Green invariant terms:
wherein Loss_u represents the unsupervised loss function;
β_u and γ_u respectively represent the unsupervised loss term coefficients; V̂_pre denotes the registered vertex coordinates of the preoperative tissue grid model during unsupervised training; for each registered preoperative vertex v̂_pre,a, its nearest point in the intraoperative tissue grid model is found and the Euclidean distance between them is taken; likewise, for each vertex v_in,b of the intraoperative tissue grid model, its nearest point in the registered preoperative tissue grid model is found and the Euclidean distance between v_in,b and that nearest point is taken;
N_1 represents the number of vertices of the preoperative tissue grid model M_pre, and N_2 represents the number of vertices of the intraoperative tissue grid model M_in;
the Cauchy-Green invariants are used as unsupervised regularization terms: the first constrains the arc distance between two surface points to remain unchanged, the second constrains the tissue surface area to remain unchanged, and the third constrains the tissue volume to remain unchanged.
The embodiment of the invention thus constructs an unsupervised fine-tuning mechanism that takes the bidirectional nearest-neighbor distance as the loss function, realizing accurate fusion of the preoperative tissue grid model and the intraoperative tissue grid model under the designated binocular endoscope image frame.
It should be noted that, compared with the virtual registration data set constructed by the biomechanical model in the prior art, the embodiment of the invention constructs the data set by using the real endoscopic image and the medical inspection data aiming at the characteristics of the in-vivo flexible dynamic environment, and the accuracy of network registration trained by the data set is higher.
Specifically, in a training stage of the registration module, generating a training set based on real data includes:
according to the feature point pairs between the designated binocular endoscope image frame and the preoperative tissue grid model, the preoperative tissue grid model and the intraoperative tissue grid model are registered by a feature-point-based non-rigid algorithm, wherein for any feature point the overall transfer matrix T_G of the preoperative tissue grid model and the local deformation transfer matrix T_l,a of the feature point are solved with the non-rigid registration algorithm ICP (Non-rigid ICP), so that T_G · T_l,a maps the a-th feature point v_pre,a of the preoperative tissue grid model used for non-rigid registration onto its corresponding feature point v_in,a of the intraoperative tissue grid model;
the local deformation transfer matrices T_l of all vertices in the preoperative tissue grid model are then obtained by quaternion interpolation, and the registered coordinate labels v_pre^gt of the vertices v_pre of the preoperative tissue grid model are obtained through this transformation relation.
Correspondingly, in the training stage of the registration module, a supervised loss function Loss_s is constructed as a weighted combination of an l2 true-value loss and Cauchy-Green invariant terms:
wherein Loss_s represents the supervised loss function of the training phase;
β_s and γ_s respectively represent the supervised loss term coefficients;
N_1 represents the number of vertices of the preoperative tissue grid model M_pre;
the l2 true-value loss, based on the manually annotated data set, is computed between the registered vertex coordinates V̂_pre of the preoperative tissue grid model and their labels;
I_c + II_c + III_c represents the Cauchy-Green invariants used to constrain the degree of in-vivo tissue deformation: I_c constrains the arc distance between two surface points to remain unchanged, II_c constrains the tissue surface area to remain unchanged, and III_c constrains the tissue volume to remain unchanged.
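As a hedged illustration only, the supervised loss can be sketched as an l2 vertex loss plus simple surface-area and volume preservation penalties standing in for the Cauchy-Green invariant terms; the exact invariant computation and the placement of the coefficients β_s and γ_s in the patent may differ.

```python
import torch

def mesh_area_volume(verts, faces):
    """Total triangle area and signed volume of a closed triangle mesh."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    cross = torch.cross(b - a, c - a, dim=1)
    area = 0.5 * cross.norm(dim=1).sum()
    volume = (a * cross).sum() / 6.0          # divergence-theorem signed volume
    return area, volume

def supervised_loss(v_hat, v_gt, v_ref, faces, beta_s=0.1, gamma_s=0.1):
    """l2 true-value loss + deformation penalties (area/volume change versus the
    undeformed reference mesh), loosely mirroring I_c + II_c + III_c."""
    l2 = (v_hat - v_gt).norm(dim=1).mean()
    area_hat, vol_hat = mesh_area_volume(v_hat, faces)
    area_ref, vol_ref = mesh_area_volume(v_ref, faces)
    return l2 + beta_s * (area_hat - area_ref).abs() + gamma_s * (vol_hat - vol_ref).abs()
```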
And the first acquisition module is used for receiving an area to be excised, which is marked on the interested area of the appointed binocular endoscope image frame by a doctor, and acquiring a corresponding vertex set of all pixel points in the area to be excised in the whole tissue grid model.
Define: the vertex set of all pixel points of the region to be excised in the intraoperative tissue grid model M_in is P_s; the vertex set corresponding to all pixel points of the region to be excised in the whole tissue grid model M_trans is P_trans, any vertex of which is p_trans; and the corresponding vertex set on the preoperative tissue grid model M_pre is P_ct, any vertex of which is p_ct.
The first acquisition module is specifically configured to acquire, according to P_s and using a nearest neighbor algorithm, any vertex p_trans of P_trans on M_trans.
It will be understood that the second modeling unit of the registration module calculates, from the depth values of all pixel points in the region to be excised and through the pinhole camera model, the spatial coordinates of these pixel points in the camera coordinate system, thereby obtaining the vertex set P_s.
For a second acquisition module, acquiring and visualizing a corresponding set of vertices on the preoperative tissue mesh model according to the corresponding set of vertices in the overall tissue mesh model;
The second acquisition module is configured to acquire and visualize, according to R_pre and t_pre obtained by the registration module and p_trans acquired by the first acquisition module, any vertex p_ct of P_ct on M_pre:
p_ct = R_pre · p_trans + t_pre
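A small sketch of this two-step mapping (P_s to P_trans by nearest neighbor, then to P_ct by the registration transform) is given below, assuming R_pre and t_pre are available as a 3x3 matrix and a 3-vector.

```python
import numpy as np
from scipy.spatial import cKDTree

def map_to_preoperative(p_s, v_trans, r_pre, t_pre):
    """P_s -> nearest vertices P_trans on the whole tissue grid model,
    then p_ct = R_pre @ p_trans + t_pre on the preoperative model."""
    tree = cKDTree(v_trans)
    _, idx = tree.query(p_s)                 # nearest-neighbor lookup
    p_trans = v_trans[idx]
    p_ct = p_trans @ r_pre.T + t_pre         # row-vector form of R p + t
    return p_trans, p_ct
```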
And the solving module is used for receiving the cutting direction and the manual candidate region marked on the preoperative tissue grid model by a doctor, acquiring the local volume of the organ corresponding to the region to be resected by combining the corresponding vertex set on the preoperative tissue grid model, and finally acquiring the volume or the volume percentage of the residual organ.
Specifically, the solving module is specifically configured to:
as shown in FIG. 2, traverse P ct Vertex p of (b) ct Along the cutting direction v pro Acquiring the cutting direction v and the manual candidate region pro A plurality of nearest vertexes, and obtaining a corresponding vertex p 'by averaging the plurality of vertexes' ct Form a corresponding set of vertices P' ct
According to the set of vertices P ct and P′ct Pairing spatial points p ct and p′ct Generating a plurality of irregular pentahedrons in the region S to be resected;
calculating the volume of each irregular pentahedron and summing to obtain Taking the local volume V of the organ corresponding to the region to be resected cut Combined with the whole volume V of the preoperative organ all Finally, the volume V of the residual organ is obtained remain =V all -V cut Or the volume percentage of the remaining organ
Wherein, as shown in FIG. 3, the volume of any pentahedron A_1A_2A_3B_1B_2B_3 is calculated as follows:
draw auxiliary lines B_1'A_2 and A_2B_3' parallel to B_1B_2 and B_2B_3 respectively, dividing the irregular pentahedron into a triangular prism B_1B_2B_3-B_1'A_2B_3' and a rectangular pyramid A_1A_3B_3'B_1'-A_2;
calculate the area of plane B_1B_2B_3 and the perpendicular distance h_prism from A_2 to plane B_1B_2B_3, obtaining the volume of the triangular prism B_1B_2B_3-B_1'A_2B_3':
V_prism = S_ΔB1B2B3 × h_prism
calculate the area of plane A_1A_3B_3'B_1' and the perpendicular distance h_pyramid from A_2 to plane A_1A_3B_3'B_1', obtaining the volume of the rectangular pyramid A_1A_3B_3'B_1'-A_2:
V_pyramid = (1/3) × S_A1A3B3'B1' × h_pyramid
the volume V_final of each pentahedron is:
V_final = V_prism + V_pyramid
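Numerically, the pentahedron volume can also be obtained by splitting the wedge A_1A_2A_3-B_1B_2B_3 into three tetrahedra, which gives the same result when the side faces are planar; the sketch below uses this alternative split and a hypothetical triangulation of the marked region, so it is an illustration rather than the patented construction.

```python
import numpy as np

def tet_volume(p0, p1, p2, p3):
    """Unsigned volume of a tetrahedron."""
    return abs(np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0]))) / 6.0

def pentahedron_volume(a1, a2, a3, b1, b2, b3):
    """Volume of the wedge A1A2A3-B1B2B3 (A_i paired with B_i), computed as
    the sum of three tetrahedra instead of the prism + pyramid decomposition."""
    return (tet_volume(a1, a2, a3, b1)
            + tet_volume(a2, a3, b1, b2)
            + tet_volume(a3, b1, b2, b3))

def resected_volume(p_ct, p_ct_prime, faces):
    """Sum pentahedron volumes over the paired vertex sets; V_remain = V_all - V_cut.
    'faces' is a hypothetical triangulation (index triples) over P_ct."""
    v_cut = 0.0
    for i, j, k in faces:
        v_cut += pentahedron_volume(p_ct[i], p_ct[j], p_ct[k],
                                    p_ct_prime[i], p_ct_prime[j], p_ct_prime[k])
    return v_cut
```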
Furthermore, in addition to the above-mentioned factors that may affect the fusion accuracy, how the second modeling unit acquires the depth values of the designated binocular endoscope image frame is also one of the key factors, since this directly affects the accuracy of the intraoperative tissue grid model.
As described above, the second modeling unit acquires the depth value of the designated binocular endoscope image frame by an online self-supervised learning depth estimation method based on the binocular endoscope; the binocular depth estimation network used by this method has the capability of rapid over-fitting and can continuously adapt to new scenes by exploiting self-supervision information;
In the real-time reconstruction mode, the second modeling unit is specifically configured to perform fitting on the continuous video frames to obtain depth values of the designated binocular endoscope image frames, and includes:
the extraction subunit is used for acquiring binocular endoscope images, and extracting multi-scale features of the current frame image by adopting an encoder network of the current binocular depth estimation network;
the fusion subunit is used for fusing the multi-scale features by adopting a decoder network of the current binocular depth estimation network to acquire the parallax of each pixel point in the current frame image;
the conversion subunit is used for converting parallax into depth according to the internal and external parameters of the camera and outputting the depth as a result of the current frame image;
and the first estimation subunit is used for updating parameters of the current binocular depth estimation network by using self-supervision loss under the condition of not introducing an external true value, and is used for depth estimation of the next frame of image.
This depth estimation scheme exploits the similarity of consecutive frames and extends the idea of over-fitting on a single pair of binocular images to over-fitting over a temporal sequence; by continuously updating the model parameters through online learning, high-precision tissue depth can be obtained in various binocular endoscopic surgery environments.
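A hedged sketch of one real-time-mode iteration follows: the network predicts a disparity map, a self-supervision loss is computed (here a left-right photometric reprojection term stands in for the patent's self-supervision loss, which is an assumption), and one gradient step is taken before the next frame; the model(left, right) signature returning a (B, H, W) pixel-disparity map is hypothetical.

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right_img, disp_left):
    """Sample the right image at x - d(x) to reconstruct the left image.
    right_img: (B, C, H, W); disp_left: (B, H, W) disparity in pixels."""
    b, _, h, w = right_img.shape
    xs = torch.linspace(-1, 1, w, device=right_img.device).view(1, 1, w).expand(b, h, w)
    ys = torch.linspace(-1, 1, h, device=right_img.device).view(1, h, 1).expand(b, h, w)
    grid = torch.stack([xs - 2 * disp_left / w, ys], dim=-1)
    return F.grid_sample(right_img, grid, align_corners=True)

def online_step(model, optimizer, left, right):
    """One real-time-mode iteration: predict, self-supervised loss, update."""
    disp = model(left, right)                            # per-pixel disparity
    recon = warp_right_to_left(right, disp)
    loss = (recon - left).abs().mean()                   # photometric L1 term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                     # parameters reused for the next frame
    return disp.detach(), loss.item()
```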
The pre-training stage of the binocular depth estimation network abandons the traditional training mode and adopts the idea of meta-learning: the network learns from one image how to predict the depth of another image, and the resulting loss is used to update the network. This effectively promotes generalization of the network to new scenes and robustness to low-texture, complex-illumination conditions, while greatly reducing the time required for subsequent over-fitting.
As shown in part b of fig. 4, training and obtaining the initial model parameters of the binocular depth estimation network by meta-learning specifically includes:

S100, randomly select an even number of binocular images {e_1, e_2, ..., e_2K} and split them equally into a support set D^s and a query set D^q; the images in D^s and D^q are randomly paired to form K tasks {T_1, ..., T_K};

S200, inner-loop training: for each task, update the parameters once according to the loss computed on its support-set image:

φ̂_k = φ_m − α·∇_{φ_m} L(f(D_k^s; φ_m))

wherein φ̂_k denotes the network parameters after the inner-loop update; ∇ denotes the derivative, α is the inner-loop learning rate, D_k^s is the support-set image of the k-th task, L(f(D_k^s; φ_m)) is the loss computed with the initial model parameters φ_m, and f denotes the binocular depth estimation network;

S300, outer-loop training: according to the query-set images, directly update the initial model parameters φ_m to φ_{m+1} using the meta-learning loss computed with the updated model:

φ_{m+1} = φ_m − β·∇_{φ_m} Σ_{k=1}^{K} L(f(D_k^q; φ̂_k))

wherein β is the outer-loop learning rate, D_k^q is the query-set image of the k-th task, and L(f(D_k^q; φ̂_k)) is the meta-learning loss.
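The following PyTorch sketch illustrates one such meta-update under a first-order approximation (FOMAML-style). The helper self_supervised_loss(net, pair), the task structure and the learning rates are illustrative assumptions, not the exact implementation of the embodiment.

```python
import copy
import torch

def meta_pretrain_step(model, tasks, self_supervised_loss,
                       alpha=1e-4, beta=1e-4):
    """One first-order meta-update. `tasks` is a list of (support, query)
    binocular image pairs; `self_supervised_loss(net, pair)` is assumed to
    return the self-supervised loss L_self for that pair."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in tasks:
        # Inner loop: adapt a copy of the current initial parameters
        # on the support pair of this task.
        adapted = copy.deepcopy(model)
        inner_loss = self_supervised_loss(adapted, support)
        grads = torch.autograd.grad(inner_loss, list(adapted.parameters()))
        with torch.no_grad():
            for p, g in zip(adapted.parameters(), grads):
                p -= alpha * g
        # Outer loop: the gradient of the query loss w.r.t. the adapted
        # parameters approximates the meta-gradient for the initial parameters.
        outer_loss = self_supervised_loss(adapted, query)
        grads = torch.autograd.grad(outer_loss, list(adapted.parameters()))
        for mg, g in zip(meta_grads, grads):
            mg += g / len(tasks)
    # Update the initial parameters phi_m -> phi_{m+1}.
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= beta * mg
```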
The following is a detailed description of the respective sub-units included in the second modeling unit:
for the extraction subunit, as shown in part a of fig. 4, it acquires binocular endoscopic images, and extracts multi-scale features of the current frame image using the encoder network of the current binocular depth estimation network.
Illustratively, the encoder of the binocular depth estimation network in this subunit employs a ResNet18 network to extract 5 feature maps at different scales from the current left-eye and right-eye frame images, respectively.
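A hedged sketch of how such a ResNet18 backbone could expose five feature maps at successively halved resolutions is shown below; the split into a stem and four residual stages is a common convention and not necessarily the exact layering of the embodiment.

```python
import torch
import torchvision

class ResNet18Encoder(torch.nn.Module):
    """Extracts 5 feature maps at decreasing resolution from one image."""
    def __init__(self):
        super().__init__()
        net = torchvision.models.resnet18()
        self.stem = torch.nn.Sequential(net.conv1, net.bn1, net.relu)
        self.pool = net.maxpool
        self.stages = torch.nn.ModuleList(
            [net.layer1, net.layer2, net.layer3, net.layer4])

    def forward(self, x):
        feats = [self.stem(x)]          # 1/2 resolution
        x = self.pool(feats[-1])
        for stage in self.stages:       # 1/4, 1/8, 1/16, 1/32 resolution
            x = stage(x)
            feats.append(x)
        return feats
```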
For the fusion subunit, as shown in part a of fig. 4, the decoder network of the current binocular depth estimation network is adopted to fuse the multi-scale features and obtain the parallax of each pixel point in the current frame image; this specifically includes:
using the decoder network, the coarse-scale feature map is passed through a convolution block, up-sampled, concatenated with the fine-scale feature map, and fused again through a convolution block, where each convolution block is built from a reflection padding layer, a convolution layer and a nonlinear activation function ELU;
the parallax is then calculated directly from the highest-resolution output of the network:

d=k·(sigmoid(conv(Y))-TH)

wherein d represents the parallax estimate of the pixel point; k is a preset maximum parallax range, and Y is the highest-resolution output; TH denotes a parameter related to the type of binocular endoscope, taking 0.5 when negative parallax may occur in the endoscopic images and 0 when the endoscopic images contain only positive parallax; conv is the convolution layer; sigmoid performs range normalization.
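A one-line sketch of this disparity head is given below; the value k = 192 is only an illustrative maximum-parallax range, and the conv layer is assumed to be defined elsewhere.

```python
import torch

def disparity_head(Y, conv, k=192.0, th=0.5):
    # d = k * (sigmoid(conv(Y)) - TH); TH = 0.5 when negative parallax may
    # occur for the endoscope type, TH = 0.0 when all parallax is positive.
    return k * (torch.sigmoid(conv(Y)) - th)
```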
For the conversion subunit, it converts the parallax into depth according to the camera intrinsic and extrinsic parameters and outputs the depth as the result of the current frame image.
The conversion of parallax into depth in this subunit is:

D = f_x·b / d

wherein f_x is the focal-length parameter taken from the intrinsic matrix of the corresponding eye of the binocular endoscope (when the principal points c_x of the left-eye and right-eye cameras differ, the parallax is compensated by their difference); if f_x is taken from the left-eye camera intrinsics, d is the parallax estimate of the left-eye pixel and D is the depth estimate of the left-eye pixel; if f_x is taken from the right-eye camera intrinsics, d is the parallax estimate of the right-eye pixel and D is the depth estimate of the right-eye pixel; b is the baseline length, i.e. the binocular camera extrinsic parameter.
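A minimal sketch of this conversion for a rectified stereo pair follows; the principal-point compensation is omitted here and the small clamp on d is only a numerical safeguard.

```python
import torch

def disparity_to_depth(d, fx, baseline):
    # D = f_x * b / d; f_x comes from the intrinsic matrix of the eye whose
    # disparity map d is being converted, b is the baseline length from the
    # extrinsic calibration.
    return fx * baseline / torch.clamp(d, min=1e-6)
```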
For the first estimation subunit, as shown in part b of fig. 4, it updates the parameters of the current binocular depth estimation network with the self-supervised loss, without introducing external truth values, and the updated network is used for the depth estimation of the next frame image.
It should be understood that the "external truth value" referred to in the embodiments of the present invention is a label (or "supervision information"), which is well known in the art.
In this subunit, as shown in part b of fig. 4, the self-supervised loss is expressed as:

L_self = α_1·(L_gc^l + L_gc^r) + α_2·(L_pe^l + L_pe^r) + α_3·(L_sm^l + L_sm^r) + α_4·L_of

wherein L_self represents the self-supervised loss; α_1, α_2, α_3 and α_4 are hyper-parameters; the superscript l corresponds to the left image and r to the right image; the individual terms are defined below.
Since the binocular cameras observe the same scene, corresponding pixel points on the left and right depth maps should take equal values once transformed into the same coordinate system; the two terms L_gc^l and L_gc^r are therefore introduced.

(1) L_gc^l represents the geometric consistency loss of the left image:

L_gc^l = (1/|P_1|)·Σ_{p∈P_1} | D̃_l(p) − D′_l(p) |

wherein P_1 represents the first set of valid pixel points (i.e. right-eye valid pixel points); D̃_l(p) represents the left-eye depth obtained by transforming the valid pixel point p from the right-eye depth map through the camera pose, and D′_l(p) denotes the left-eye depth obtained by sampling the left-eye depth map at the location given by the predicted right-view parallax Dis_R for the valid pixel p.

(2) L_gc^r represents the geometric consistency loss of the right image:

L_gc^r = (1/|P_2|)·Σ_{p∈P_2} | D̃_r(p) − D′_r(p) |

wherein P_2 represents the second set of valid pixel points (i.e. left-eye valid pixel points); D̃_r(p) represents the right-eye depth obtained by transforming the valid pixel point p from the left-eye depth map through the camera pose, and D′_r(p) denotes the right-eye depth obtained by sampling the right-eye depth map at the location given by the predicted left-view parallax Dis_L for the valid pixel p.
The geometric consistency constraint added to the training loss ensures the general applicability of the network across hardware and realizes autonomous adaptation to irregular binocular images such as those from surgical endoscopes.
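A minimal sketch of this consistency term is given below, assuming the pose-warped depth and the disparity-sampled depth have already been computed elsewhere; tensor shapes and the masking convention are illustrative.

```python
import torch

def geometric_consistency_loss(depth_warped, depth_sampled, valid_mask):
    """L_gc over the valid pixel set: depth_warped is the depth transformed
    from the other view with the stereo pose, depth_sampled is this view's
    depth looked up at the disparity-matched location; all inputs are
    (B, 1, H, W) tensors, valid_mask is 0/1."""
    diff = (depth_warped - depth_sampled).abs()
    return (diff * valid_mask).sum() / valid_mask.sum().clamp(min=1.0)
```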
Assuming constant brightness and spatial smoothness during the endoscopic surgery, the image of one eye can be reconstructed from the other eye by re-projection between the left and right views; a structural similarity loss is added at the same time to compare the brightness, contrast and structure of the two images after normalization. The terms L_pe^l and L_pe^r are introduced.

(3) L_pe^l represents the photometric loss of the left image:

L_pe^l = Σ_p [ λ_i·| I_L(p) − I′_L(p) | + λ_s·(1 − SSIM_{LL′}(p)) ]

wherein I_L(p) represents the left image, I′_L(p) represents the left-eye endoscopic reconstructed image generated from the right image and the predicted left-view parallax Dis_L(p), λ_i and λ_s are balance parameters, and SSIM_{LL′}(p) represents the structural similarity of the images I_L(p) and I′_L(p);

(4) L_pe^r represents the photometric loss of the right image:

L_pe^r = Σ_p [ λ_i·| I_R(p) − I′_R(p) | + λ_s·(1 − SSIM_{RR′}(p)) ]

wherein I_R(p) represents the right image, I′_R(p) represents the right-eye endoscopic reconstructed image generated from the left image and the predicted right-view parallax Dis_R(p), and SSIM_{RR′}(p) represents the structural similarity of the images I_R(p) and I′_R(p).
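The sketch below shows one common way to combine an L1 term with a structural-similarity term; the 3x3 average-pooling SSIM, the factor 1/2 on the dissimilarity and the weights lambda_i, lambda_s are conventional choices assumed for illustration.

```python
import torch
import torch.nn.functional as F

def ssim(x, y):
    """Simplified SSIM computed with 3x3 average pooling."""
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return (num / den).clamp(0, 1)

def photometric_loss(target, reconstructed, lambda_i=0.15, lambda_s=0.85):
    """Weighted sum of the L1 difference and the structural dissimilarity
    between a view and its reconstruction from the other view."""
    l1 = (target - reconstructed).abs().mean(1, keepdim=True)
    dssim = (1.0 - ssim(target, reconstructed)).mean(1, keepdim=True) / 2.0
    return (lambda_i * l1 + lambda_s * dssim).mean()
```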
In tissue regions with low texture and uniform colour, a smoothness prior is used to aid inference and to regularize the depth; the terms L_sm^l and L_sm^r are introduced.

(5) L_sm^l represents the smoothness loss of the left image:

L_sm^l = Σ_p ( | ∂_x d̂_l(p) | + | ∂_y d̂_l(p) | )

wherein d̂_l represents the normalized left-eye depth map, and ∂_x and ∂_y represent the first derivatives along the horizontal and vertical directions of the image;

(6) L_sm^r represents the smoothness loss of the right image:

L_sm^r = Σ_p ( | ∂_x d̂_r(p) | + | ∂_y d̂_r(p) | )

wherein d̂_r represents the normalized right-eye depth map, and ∂_x and ∂_y represent the first derivatives along the horizontal and vertical directions of the image.
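A short sketch of this regularizer on a mean-normalized depth map follows; the mean normalization and the omission of any edge-aware image-gradient weighting are assumptions made only for illustration.

```python
import torch

def smoothness_loss(depth):
    """First-order smoothness on the mean-normalized depth map (B, 1, H, W)."""
    d = depth / (depth.mean(dim=(2, 3), keepdim=True) + 1e-7)
    dx = (d[:, :, :, :-1] - d[:, :, :, 1:]).abs()   # horizontal derivative
    dy = (d[:, :, :-1, :] - d[:, :, 1:, :]).abs()   # vertical derivative
    return dx.mean() + dy.mean()
```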
Specifically, the first valid pixel set P_1 and the second valid pixel set P_2 are acquired as follows:

define the left-view parallax predicted by the current binocular depth estimation network as Dis_L and the right-view parallax as Dis_R; the left-eye and right-eye cross-validation masks M_L^cross(i,j) and M_R^cross(i,j) are obtained by checking, for every pixel (i,j), whether the parallax predicted in one view is consistent with the parallax of the matched pixel in the other view;

wherein M_L^cross(i,j) and M_R^cross(i,j) are used to judge whether the pixels at position (i,j) in the left-eye and right-eye images, respectively, are within the stereo matching range; i takes all integers in the interval [1, W]; j takes all integers in the interval [1, H]; W represents the image width and H represents the image height;

let c take L or R; when M_c^cross(i,j) = 1, the pixel at position (i,j) under the current calculation is within the stereo matching range, otherwise it is not;

projecting with the pinhole camera model, the binocular pose transformation and the predicted depth yields a valid-area mask based on 3D points, M_c^3d(i,j), taking the value 0 or 1; when M_c^3d(i,j) = 1, the pixel at position (i,j) under the current calculation is within the stereo matching range, otherwise it is not;

the final valid-area mask is obtained as

M_c(i,j) = M_c^cross(i,j)·M_c^3d(i,j)

if a pixel point p satisfies M_c(p) = 1: when c takes R, the first valid pixel set P_1 is obtained; when c takes L, the second valid pixel set P_2 is obtained.
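The snippet below sketches the cross-validation part of this procedure for the left eye, warping the right-view disparity to the left view and keeping pixels whose disparities agree; the disparity sign convention, the tolerance eps and the omission of the 3D-projection mask are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cross_validation_mask(disp_l, disp_r, eps=1.0):
    """Left-eye cross-validation mask for (B, 1, H, W) disparity maps:
    a left pixel is kept when its disparity agrees with the disparity of
    the right-image pixel it maps to."""
    b, _, h, w = disp_l.shape
    xs = torch.linspace(-1.0, 1.0, w, device=disp_l.device)
    ys = torch.linspace(-1.0, 1.0, h, device=disp_l.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    # Shift the sampling grid horizontally by the (normalized) left disparity.
    grid_x = grid_x.unsqueeze(0) - 2.0 * disp_l.squeeze(1) / max(w - 1, 1)
    grid = torch.stack((grid_x, grid_y.unsqueeze(0).expand_as(grid_x)), dim=-1)
    disp_r_warped = F.grid_sample(disp_r, grid, align_corners=True)
    return ((disp_l - disp_r_warped).abs() < eps).float()
```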
In rectified stereo images, the extra regions caused by the viewpoint offset cannot find matching pixels. The embodiment of the present invention further considers that the low texture and non-uniform illumination of tissue in the body lead to weak local features, so pixels inside these invalid regions tend to find spuriously similar pixels in adjacent regions. Therefore, as described above, the embodiment of the invention proposes a cross-validation-based binocular valid-region recognition algorithm, eliminating the misguidance that the self-supervised loss of pixels in invalid regions exerts on network learning and improving the accuracy of depth estimation.
In addition, in order to avoid a lack of robustness of the depth estimation in pure-colour (texture-less) or low-illumination scenes, the loss L_of is also introduced.

(7) L_of represents the sparse optical flow loss:

L_of = γ_1·(1/|P_3|)·Σ_{p∈P_3} | Dis_L(p) − OF_L(p) | + γ_2·(1/|P_4|)·Σ_{p∈P_4} | Dis_R(p) − OF_R(p) |

wherein Dis_L(p) represents the predicted left-eye parallax map, OF_L(p) represents the left-eye sparse parallax map, Dis_R(p) represents the predicted right-eye parallax map, and OF_R(p) represents the right-eye sparse parallax map; P_3 represents the third set of valid pixels in the left-eye sparse parallax map OF_L(p); P_4 represents the fourth set of valid pixels in the right-eye sparse parallax map OF_R(p); γ_1 and γ_2 are balance parameters, both non-negative and not both equal to 0.
Specifically, the third valid pixel set P_3 and the fourth valid pixel set P_4 are acquired as follows:

the sparse optical flow (Δx, Δy) is calculated every n pixels along the row and column directions with the LK (Lucas-Kanade) optical flow algorithm, where Δx represents the offset of the pixel point in the horizontal direction and Δy the offset in the vertical direction;

when solving the optical flow from left to right, the parallax of a pixel position is kept as Δx only when |Δy| < KT and Δx > thred_1, where KT and thred_1 are the corresponding preset thresholds; positions that do not satisfy the condition, or for which no sparse optical flow could be computed, have their parallax set to 0, yielding the final sparse parallax map OF_L(p); the pixels with OF_L(p) ≠ 0 form the third valid pixel set P_3;

when solving the optical flow from right to left, the parallax of a pixel position is kept as Δx only when |Δy| < KT and Δx < thred_2, where thred_2 is the corresponding preset threshold; positions that do not satisfy the condition, or for which no sparse optical flow could be computed, have their parallax set to 0, yielding the final sparse parallax map OF_R(p); the pixels with OF_R(p) ≠ 0 form the fourth valid pixel set P_4.
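The following OpenCV sketch illustrates the left-to-right case of this procedure on a regular grid of points; the grid spacing n, the thresholds kt and thred, and the assumption of 8-bit BGR input images are illustrative only.

```python
import cv2
import numpy as np

def sparse_disparity_lk(img_l, img_r, n=8, kt=1.0, thred=1.0):
    """Sparse left-to-right parallax via Lucas-Kanade optical flow sampled
    every n pixels; kt bounds the vertical offset and thred is the minimum
    horizontal offset. Positions that fail the checks stay 0."""
    h, w = img_l.shape[:2]
    gray_l = cv2.cvtColor(img_l, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_r, cv2.COLOR_BGR2GRAY)
    ys, xs = np.mgrid[0:h:n, 0:w:n]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    pts = pts.reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(gray_l, gray_r, pts, None)
    disp = np.zeros((h, w), dtype=np.float32)
    for p0, p1, ok in zip(pts[:, 0], nxt[:, 0], status[:, 0]):
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        # Keep the point only if the vertical offset is small and the
        # horizontal offset exceeds the threshold.
        if ok and abs(dy) < kt and dx > thred:
            disp[int(p0[1]), int(p0[0])] = dx
    return disp
```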
The embodiment of the invention thus introduces the traditional Lucas-Kanade optical flow to infer sparse parallax between the binocular images, giving the network a reasonable learning direction, improving its fast-learning capability and reducing the probability of falling into a local optimum.
It is particularly emphasized that, in addition to the real-time reconstruction mode, the online self-supervised learning depth estimation method adopted by the second modeling unit in the embodiment of the present invention also provides an accurate measurement mode. As shown in part b of fig. 4, in the accurate measurement mode the second modeling unit is specifically configured to perform fitting on a key image video frame, and includes:
a second estimation subunit, used for, without introducing external truth values, starting from the binocular depth estimation network obtained in the real-time reconstruction mode for the frame preceding the specified binocular endoscope image frame, updating the parameters of the binocular depth estimation network with the self-supervised loss corresponding to the specified binocular endoscope image frame until convergence, and using the converged binocular depth estimation network for accurate depth estimation of the specified binocular endoscope image frame to obtain its depth value.
It should be noted that the technical details of the depth estimation network, the self-supervised loss function, the valid-area mask calculation and the meta-learning pre-training in the accurate measurement mode are consistent with those described for the real-time reconstruction mode, and are not repeated here.
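A minimal sketch of this single-frame over-fitting loop is given below; the optimizer, learning rate, iteration cap and the flat-loss stopping rule are illustrative assumptions rather than the embodiment's convergence criterion.

```python
import torch

def refine_on_key_frame(model, key_frame_pair, self_supervised_loss,
                        lr=1e-4, max_iters=300, tol=1e-5):
    """Accurate-measurement mode sketch: starting from the network state
    reached in real-time mode, keep minimizing the self-supervised loss on
    the single designated frame pair, then use the converged network for
    that frame's depth."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    prev = float("inf")
    for _ in range(max_iters):
        opt.zero_grad()
        loss = self_supervised_loss(model, key_frame_pair)
        loss.backward()
        opt.step()
        if abs(prev - loss.item()) < tol:   # treat a flat loss as convergence
            break
        prev = loss.item()
    return model
```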
In summary, compared with the prior art, the embodiment of the present invention has the following beneficial effects:
1. The embodiment of the invention does not rely on extra equipment; through the matching relationship between the preoperative tissue mesh model and the intraoperative tissue mesh model, the measurement of the residual organ volume is converted into a volume measurement on the corresponding region of the preoperative tissue mesh model, avoiding the interference of complex tissue deformation and invisible regions on the volume prediction.
2. The embodiment of the invention generates training data from real data by manual labelling and interpolation, trains the multimodal registration fusion network in a supervised manner, and finally further improves the registration accuracy by unsupervised fine-tuning.
3. The embodiment of the invention takes interaction with the surgeon during the operation into account: it receives the surgeon's simple labelling of the region of interest and returns accurate volume measurement information, realizing active selectivity and high reference value during the operation.
4. The embodiment of the invention discloses an online self-supervised learning depth estimation method based on a binocular endoscope, which has at least the following beneficial effects:
4.1. The switchable depth estimation modes not only provide a real-time point cloud of the intraoperative anatomical structure, helping the surgeon intuitively understand the intraoperative three-dimensional structure, but also realize high-precision reconstruction of manually selected key frames based on single-frame over-fitting, providing a basis for subsequent measurement, so that both speed and precision are achieved in the application.
4.2. By utilizing the similarity of consecutive frames, the idea of over-fitting on a single pair of binocular images is extended to over-fitting on a time sequence, and high-precision tissue depth can be obtained in a variety of binocular endoscopic surgery environments by continuously updating the model parameters through online learning.
4.3. In the pre-training stage the network model abandons the conventional training scheme and adopts the idea of meta-learning: the network learns from one image to predict the depth of another image and the resulting loss is used to update the network, which effectively promotes generalization to new scenes and robustness to low texture and complex illumination, while greatly reducing the time required for subsequent over-fitting.
4.4. Geometric consistency constraints are added to the training loss to ensure the general applicability of the network across hardware and to realize autonomous adaptation to irregular binocular images such as those from surgical endoscopes.
4.5. The depth estimation of each binocular image frame is taken as an independent task and fitted in real time to obtain a high-precision model adapted to the current frame; new scenes can be learned quickly through online learning, yielding high-precision depth estimation results.
4.6. The cross-validation-based binocular valid-region recognition algorithm eliminates the misguidance that the self-supervised loss of pixels in invalid regions exerts on network learning, improving the accuracy of depth estimation.
4.7. The traditional Lucas-Kanade optical flow is introduced to infer sparse parallax between the binocular images, giving the network a reasonable learning direction, improving its fast-learning capability and reducing the probability of falling into a local optimum.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An intraoperative residual organ volume estimation system for surgical planning assistance, comprising:
the registration module is used for registering the preoperative tissue grid model and the intraoperative tissue grid model to obtain an integral tissue grid model which displays the internal tissue information of the preoperative tissue grid model in the intraoperative tissue grid model;
the tissue grid model in the operation is obtained according to the depth value of the appointed binocular endoscope image frame;
the first acquisition module is used for receiving a region to be excised marked on the region of interest of the appointed binocular endoscope image frame by a doctor and acquiring a corresponding top point set of all pixel points in the region to be excised in the whole tissue grid model;
The second acquisition module is used for acquiring and visualizing the corresponding vertex set on the preoperative tissue grid model according to the corresponding vertex set in the whole tissue grid model;
the solving module is used for receiving the cutting direction and the manual candidate region marked on the preoperative tissue grid model by a doctor, acquiring the local volume of the organ corresponding to the region to be excised by combining the corresponding vertex set on the preoperative tissue grid model, and finally acquiring the volume or the volume percentage of the residual organ.
2. The intraoperative residual organ volume estimation system of claim 1, wherein the registration module comprises:
the first modeling unit is used for acquiring a preoperative organization grid model with organization semantic information;
the second modeling unit is used for acquiring an intraoperative tissue grid model according to the depth value of the appointed binocular endoscope image frame;
the feature extraction unit is used for respectively acquiring corresponding multi-level features according to the preoperative tissue grid model and the intraoperative tissue grid model;
the overlap prediction unit is used for acquiring an overlap region of the preoperative tissue grid model and the intraoperative tissue grid model according to the multi-level characteristics, and acquiring a pose transformation relationship of the vertex of the preoperative tissue grid model in the overlap region;
The global fusion unit is used for acquiring all vertex coordinates of the preoperative tissue grid model after registration according to the transformation relation between the coordinates and the pose of the vertices of the preoperative tissue grid model in the overlapping area and the coordinates of the vertices of the preoperative tissue grid model in the non-overlapping area;
and the information display unit is used for acquiring an integral tissue grid model which displays the internal tissue information of the preoperative tissue grid model in the operative tissue grid model according to all the vertex coordinates of the preoperative tissue grid model after registration.
3. The intraoperative residual organ volume estimation system of claim 2,
the feature extraction unit adopts Chebyshev spectral graph convolution to extract the multi-level features of the preoperative tissue grid model and the intraoperative tissue grid model:
wherein the preoperative tissue grid model is defined as M pre =(V pre ,E pre ), V pre representing the three-dimensional coordinates of the vertices of the preoperative tissue mesh model and E pre representing the edges between the vertices of the preoperative tissue mesh model; the intraoperative tissue mesh model is M in =(V in ,E in ), V in representing the three-dimensional coordinates of the vertices of the intraoperative tissue mesh model and E in representing the edges between the vertices of the intraoperative tissue mesh model;
the down-sampled scale features of the (n+1)-th layer and the n-th layer of the preoperative tissue model are initialized from V pre ; the (n+1)-th layer and n-th layer features of the intraoperative tissue model are initialized from V in ;
the B-order Chebyshev polynomials are calculated from each vertex and its B-ring neighbours, respectively; the scaled Laplacian matrices are calculated from the edges E in and E pre , respectively; the remaining quantities are the learning parameters of the neural network;
and/or the overlap prediction unit is specifically configured to:
acquiring the overlapping region of the preoperative tissue grid model and the intra-operative tissue grid model by adopting an attention mechanism comprises the following steps:
wherein O pre represents the mask of the overlapping region of the preoperative tissue mesh model M pre ; O in represents the mask of the overlapping region of the intraoperative tissue mesh model M in ; cross and self represent the cross-attention and self-attention operations, respectively; the down-sampled scale features at the m-th level of the vertices of the preoperative tissue mesh model and of the intraoperative tissue mesh model are used as inputs;
according to the masks O pre and O in , the vertices and features lying in the overlapping region are acquired, and the corresponding points of the vertices of the preoperative tissue mesh model M pre are calculated with a multi-layer perceptron MLP:
wherein the corresponding point is the vertex of the intraoperative tissue mesh model M in that corresponds to a vertex of the preoperative tissue mesh model M pre in the overlapping region; cosine-similarity calculation is used for matching, and a positional encoding operation is applied to the vertices of the intraoperative tissue grid model in the overlapping region;
the local neighbourhood of each vertex is established using nearest-neighbour search KNN, and singular value decomposition SVD is adopted to solve for the rotation matrix, with the following formula:
wherein the rotation matrix of each vertex is obtained from its local neighbourhood constructed with the KNN algorithm; each neighbourhood point of the vertex of the preoperative tissue mesh model is paired with the corresponding vertex of the intraoperative tissue mesh model;
the point-cloud coordinates are transformed with the rotation matrix and the displacement vector of each vertex of the preoperative tissue mesh model is predicted, with the following formula:
wherein the displacement vectors of the vertices of the preoperative tissue mesh model in the overlapping region, together with the rotation matrices, form the pose transformation relationship;
and/or the global fusion unit is specifically configured to:
the rotation matrices and displacement vectors of all vertices of the preoperative tissue mesh model are regressed with the MLP:
wherein R pre , t pre respectively represent the rotation matrices and displacement vectors of all vertices of the preoperative tissue mesh model; the weights are computed from the distances between the vertices in the overlapping region and all vertices v pre of the preoperative tissue mesh model;
wherein the result represents all vertex coordinates of the preoperative tissue mesh model after registration.
4. The intraoperative residual organ volume estimation system of claim 1 wherein during a training phase of the intraoperative residual organ volume estimation system, a training set is generated based on real data:
According to the characteristic point pair between the appointed binocular endoscope image frame and the preoperative tissue grid model, registering the preoperative tissue grid model and the intraoperative tissue grid model by adopting a non-rigid algorithm based on the characteristic points, wherein for any characteristic point, the method comprises the following steps:
where Non-rigid ICP represents the non-rigid registration algorithm ICP; v pre,a represents the a-th feature point of the preoperative tissue grid model used for non-rigid registration, and the corresponding feature point of the intraoperative tissue grid model is paired with it; T G is the overall transfer matrix of the preoperative tissue grid model and T l,a is the local deformation transfer matrix of the feature point v pre,a ;
the local deformation transfer matrices T l of all vertices in the preoperative tissue grid model are obtained by quaternion interpolation, and the registered coordinate labels of the vertices v pre in the preoperative tissue grid model are obtained through the transformation relationship.
5. The intraoperative residual organ volume estimation system of claim 4 wherein during a training phase of the intraoperative residual organ volume estimation system, a supervised loss function is constructed as follows:
wherein, loss s Representing a supervised loss function for the training phase;
β s 、γ s respectively representing supervised loss term coefficients;
N 1 represents the number of vertices of the preoperative tissue mesh model M pre ;
the first term represents the L2 true-value loss based on the manually annotated data set, comparing all registered vertex coordinates of the preoperative tissue mesh model with their labels;
I c +II c +III c represents the Cauchy-Green invariants used to restrain the degree of in-vivo tissue deformation: I c constrains the arc-length distance between two surface points to be unchanged, II c constrains the tissue surface area to be unchanged, and III c constrains the tissue volume to be unchanged.
6. The intraoperative residual organ volume estimation system of claim 1, wherein the registration module further comprises:
the precision fine tuning unit is used for introducing an unsupervised loss fine tuning network and assisting the global fusion unit to acquire all vertex coordinates of the preoperative tissue grid model after registration;
and/or the unsupervised loss fine tuning network constructs the following unsupervised loss function in the application process:
wherein, loss u Representing an unsupervised loss function;
β u , γ u respectively represent the unsupervised loss term coefficients; the vertex coordinates of the preoperative tissue mesh model after registration during unsupervised training are used; for each registered vertex of the preoperative tissue mesh model, its nearest point in the intraoperative tissue mesh model is found and their Euclidean distance is computed; likewise, for each vertex v in,b of the intraoperative tissue mesh model, its nearest point in the registered preoperative tissue mesh model is found and their Euclidean distance is computed;
N 1 represents the number of vertices of the preoperative tissue mesh model M pre , and N 2 represents the number of vertices of the intraoperative tissue mesh model M in ;
the Cauchy-Green invariants are used as constraints: the arc-length distance between two surface points is constrained to be unchanged, the tissue surface area is constrained to be unchanged, and the tissue volume is constrained to be unchanged.
7. The intraoperative residual organ volume estimation system of claim 3,
define the vertex set, in the intraoperative tissue grid model M in , of all pixel points in the region to be resected as P s ; the corresponding vertex set of all pixel points in the region to be resected in the whole tissue grid model M trans as P trans , any vertex of which is p trans ; and the corresponding vertex set on the preoperative tissue grid model M pre as P ct , any vertex of which is p ct ;
the first acquisition module is specifically configured to acquire, from P s and with a nearest-neighbour algorithm, any vertex p trans of P trans on M trans ;
and/or the second acquisition module is configured to acquire and visualize any vertex p ct of P ct on M pre according to R pre , t pre and p trans :
p ct = R pre p trans + t pre
8. The intraoperative residual organ volume estimation system of claim 7, wherein the solution module is specifically configured to:
traverse the vertices p ct of P ct ; along the cutting direction v pro , acquire the several vertices of the manual candidate region that are nearest along v pro , and obtain the corresponding vertex p′ ct by averaging these vertices, forming the corresponding vertex set P′ ct ;
according to the vertex sets P ct and P′ ct , the spatial points p ct and p′ ct are paired, generating a plurality of irregular pentahedrons in the region S to be resected;
the volume of each irregular pentahedron is calculated and summed to obtain the local organ volume V cut corresponding to the region to be resected; combined with the whole preoperative organ volume V all , the residual organ volume V remain = V all − V cut , or the residual organ volume percentage V remain /V all , is finally obtained;
wherein the volume of any pentahedron A1A2A3B1B2B3 is calculated as follows:
auxiliary lines B1′A2 and A2B3′ are drawn in the directions parallel to B1B2 and B2B3, dividing the irregular pentahedron into the triangular prism B1B2B3-B1′A2B3′ and the rectangular pyramid A1A3B3′B1′-A2;
calculate the area S(B1B2B3) of plane B1B2B3 and the perpendicular distance h_prism from A2 to plane B1B2B3, obtaining the volume of the triangular prism B1B2B3-B1′A2B3′:

V_prism = S(B1B2B3)·h_prism

calculate the area S(A1A3B3′B1′) of plane A1A3B3′B1′ and the perpendicular distance h_pyramid from A2 to plane A1A3B3′B1′, obtaining the volume of the rectangular pyramid A1A3B3′B1′-A2:

V_pyramid = (1/3)·S(A1A3B3′B1′)·h_pyramid

the volume V final of each pentahedron is:

V final = V prism + V pyramid
9. The intraoperative residual organ volume estimation system according to any one of claims 2 to 8, wherein
the second modeling unit acquires depth values of the appointed binocular endoscope image frames by adopting an online self-supervision learning depth estimation method based on the binocular endoscope; the binocular depth estimation network used by the online self-supervision learning depth estimation method has the capability of fast overlearning, and can continuously adapt to new scenes by utilizing self-supervision information;
In the real-time reconstruction mode, the second modeling unit is specifically configured to perform fitting on the continuous video frames to obtain depth values of the designated binocular endoscope image frames, and includes:
the extraction subunit is used for acquiring binocular endoscope images, and extracting multi-scale features of the current frame image by adopting an encoder network of the current binocular depth estimation network;
the fusion subunit is used for fusing the multi-scale features by adopting a decoder network of the current binocular depth estimation network to acquire the parallax of each pixel point in the current frame image;
the conversion subunit is used for converting parallax into depth according to the internal and external parameters of the camera and outputting the depth as a result of the current frame image;
and the first estimation subunit is used for updating parameters of the current binocular depth estimation network by using self-supervision loss under the condition of not introducing an external true value, and is used for depth estimation of the next frame of image.
10. The intraoperative residual organ volume estimation system of claim 9,
in the accurate measurement mode, the second modeling unit is specifically configured to perform fitting on the key image video frame, and includes:
and the second estimation subunit is used for updating parameters of the binocular depth estimation network until convergence by utilizing self-supervision loss corresponding to the appointed binocular endoscope image frame according to the binocular depth estimation network acquired in the real-time reconstruction mode by the last frame image of the appointed binocular endoscope image frame under the condition of not introducing an external true value, and using the converged binocular depth estimation network for accurate depth estimation of the appointed binocular endoscope image frame to acquire the depth value of the appointed binocular endoscope image frame.
CN202310419428.9A 2023-04-14 2023-04-14 Intraoperative residual organ volume estimation system oriented to operation planning assistance Pending CN116993805A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310419428.9A CN116993805A (en) 2023-04-14 2023-04-14 Intraoperative residual organ volume estimation system oriented to operation planning assistance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310419428.9A CN116993805A (en) 2023-04-14 2023-04-14 Intraoperative residual organ volume estimation system oriented to operation planning assistance

Publications (1)

Publication Number Publication Date
CN116993805A true CN116993805A (en) 2023-11-03

Family

ID=88525431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310419428.9A Pending CN116993805A (en) 2023-04-14 2023-04-14 Intraoperative residual organ volume estimation system oriented to operation planning assistance

Country Status (1)

Country Link
CN (1) CN116993805A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315032A (en) * 2023-11-28 2023-12-29 北京智愈医疗科技有限公司 Tissue offset monitoring method
CN117315032B (en) * 2023-11-28 2024-03-08 北京智愈医疗科技有限公司 Tissue offset monitoring method


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination