CN115919464B - Tumor positioning method, system, device and tumor development prediction method


Publication number: CN115919464B
Authority
CN
China
Prior art keywords: tumor, coordinate, data, coordinate system, image
Legal status: Active
Application number: CN202310188753.9A
Other languages: Chinese (zh)
Other versions: CN115919464A
Inventors: 邬君, 刘衍瑾, 邱建忠, 李嘉鑫, 彭彪, 许树辉, 赵炳彦, 许崇海, 肖光春, 陈照强
Current Assignee: Sichuan Ailu Intelligent Technology Co ltd; Qilu University of Technology
Original Assignee: Sichuan Ailu Intelligent Technology Co ltd; Qilu University of Technology
Application filed by Sichuan Ailu Intelligent Technology Co ltd and Qilu University of Technology

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application provides a tumor positioning method, relating to the technical fields of surgical diagnosis and medical navigation. The method uses human body surface feature points and, by fusing a transition image coordinate system with the tomographic image coordinate system, accurately converts coordinates in the tomographic image into spatial coordinates, which are used to guide the position of the executing end of a surgical robot, thereby achieving accurate positioning of a target area. The application also provides a system, a device and a related storage medium for implementing the tumor positioning method, as well as a method for predicting tumor development that reuses part of the tumor positioning system: the number of layers of a patient's periodic tomographic images, together with indexes in each tomographic layer such as area and contour amplification distance, is compared with benign tumor big data to predict how the tumor will develop.

Description

Tumor positioning method, system, device and tumor development prediction method
Technical Field
The application relates to the technical field of medical navigation, in particular to a method for accurately positioning tumors based on various image data; to a device, a system and a related storage medium for implementing the positioning method; and to a method for predicting tumor development by using the tumor positioning system.
Background
In recent years, laparoscopic surgery has been widely adopted in the surgical field owing to its small trauma, rapid recovery, light pain and high cure rate, and with technical progress it has gradually been applied to highly difficult operations such as tumor resection. However, in deep abdominal tumor resection, such as liver tumor resection, the laparoscope transmits only two-dimensional images, so depth information such as the tumor position inside the organ cannot be obtained. In traditional surgery, the chief surgeon's clinical experience and preoperative judgment of CT data are the main basis for intraoperative decisions, so inaccurate localization of liver cancer caused by the surgeon's subjective deviation is unavoidable. If the resection margin on the liver surface is not accurately defined, excessive resection can cause liver failure, while incomplete resection, or cutting directly into the tumor, can cause serious consequences such as rupture and spread of tumor cells. Accurate definition of the resection margin is therefore particularly important in liver tumor resection.
With the development of medical navigation systems, laparoscopic tumor resection combined with medical navigation is now being attempted. Existing medical navigation systems have reached the frameless positioning stage: using image-guidance technology and the principle of spatial three-dimensional positioning, they can provide accurate positioning information to related equipment. At present, medical navigation systems are mainly used in neurosurgical and orthopedic procedures. Deep abdominal tumor resection, such as liver tumor resection, and especially resection of tumors in special locations, is limited by the specificity of the position and the complexity of the site and the surrounding vascular structures, so the positioning accuracy requirement is extremely high.
At present, liver tumor localization technology is still at an exploratory stage. Among existing approaches, intraoperative fluorescence imaging has been applied successfully in surgery: exploiting the targeted aggregation and retention of indocyanine green in the liver, the morphology and position of liver cancer tissue can be displayed by corresponding visual equipment. However, in resections mediated by fluorescence imaging, the surgeon's skill remains decisive for the success or failure of tumor resection. No report yet exists on accurate positioning assistance for the executing end in liver tumor surgery.
Disclosure of Invention
One of the purposes of the application is to provide a tumor positioning method, which is used for accurately converting coordinates in a tomographic image into space coordinates by adopting characteristic points of the surface of a human body and guiding the position of an execution end of a surgical robot to realize accurate positioning operation on a target area.
The tumor positioning method provided by the application is based on the existing CT guided surgical robot system, utilizes the CT system and a coordinate system arranged in the surgical robot system, and converts the coordinate mapping of the CT system into the coordinate of the surgical robot system by additionally arranging a transition system.
In the existing image-guidance workflow, in which acquisition equipment such as CT and MRI acquires real-time images of the patient, the positions of surgical instruments and the human body are usually calibrated by means of infrared transmitters. The stability and accuracy of this infrared calibration determine the positioning accuracy of medical navigation guidance. However, it is difficult to keep the position of the human body absolutely stable during a surgical operation, and even a small movement causes a large deviation in the positioning of high-precision surgical instruments guided by the navigation system, affecting the safety of the procedure. The body surface markers acquired by the transition system introduced in this application are easily identified features on the surface of the human body, such as organs and body structures. Because the body surface markers have a stable positional correlation with the internal tumor, the positioning error caused by external markers that cannot follow displacements of the human body is avoided. Calibrating the tumor position by body surface markers also removes the excessive dependence on real-time tomographic images in current CT-guided surgery while preserving positioning accuracy, thereby further reducing the intraoperative radiation dose.
The added transition system has body-surface image recognition capability (a transition image coordinate system is built in or actively established). By recognizing the body surface marker used by the CT system, the transition image coordinate system can be associated with the tumor image coordinate system (the coordinate system internal to the CT system); at the same time, by being bound to the position of the surgical robot system, the transition system can also be converted into the positioning execution coordinate system (the coordinate system built into the surgical robot system). Therefore, any point in the tumor image coordinate system can be accurately converted, through the added transition system, into a three-dimensional coordinate in the positioning execution coordinate system.
In one embodiment of the present application, the transition system is provided by a binocular camera.
The tumor positioning method provided by the application specifically comprises the following steps:
S1. Acquire, in the tumor image coordinate system C, the three-dimensional coordinates of the body surface marker M,

M_C = (x_M, y_M, z_M),

and the coordinates of any point P of the region of interest,

P_C = (x_P, y_P, z_P).

The relative positional relationship of any point of the region of interest and the body surface marker is the vector

d = P_C - M_C.

S2. Acquire the three-dimensional coordinates M_B of the body surface marker in the transition image coordinate system B, and calculate the three-dimensional coordinates P_B of any point of the region of interest in the transition image coordinate system. The calculation process is as follows:

S21. Determine the rotation matrix R that rotates the coordinate system B into a coordinate system B' whose axes have the same directions as the coordinate axes of C;

S22. Knowing M_B, obtain from the formula M_B' = R · M_B the coordinates M_B' of the marker in the B' coordinate system;

S23. Through the vector algorithm, obtain from the formula P_B' = M_B' + d the coordinates P_B' of the point in the B' coordinate system;

S24. From the rotation matrix R, obtain its inverse R^-1 (equal to R^T, since R is orthogonal);

S25. According to the formula P_B = R^-1 · P_B', obtain the coordinates P_B of the point in the B coordinate system.

S3. Calculate, according to the mapping relation inherent to the spatial position association, the three-dimensional coordinates of the origin of the transition image coordinate system in the positioning execution coordinate system A as O_B = (x_O, y_O, z_O), and further calculate the three-dimensional coordinates of any point of the region of interest in the positioning execution coordinate system. The calculation process is as follows:

S31. From the rotation-translation matrix

T_AB = [ R_AB  t_AB ; 0  1 ],

obtain the transformation matrix between the A coordinate system and the B coordinate system, where the rotation R_AB and the translation t_AB = O_B are provided by the robot arm parameters;

S32. Through the formula [P_A; 1] = T_AB · [P_B; 1], i.e. P_A = R_AB · P_B + t_AB, obtain the coordinates of the point in the A coordinate system, expressed as

P_A = (x_A, y_A, z_A).
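For illustration, the S1-S3 chain can be written out as a short computation. The following is a minimal sketch under the reconstruction above; the function name, the identity-rotation smoke test and all numeric values are illustrative assumptions, not part of the patent:

    import numpy as np

    def roi_point_in_A(P_C, M_C, M_B, R, R_AB, t_AB):
        """Map a region-of-interest point from the tumor image frame C
        to the positioning execution frame A via the transition frame B.

        P_C, M_C : point and body surface marker in frame C (S1)
        M_B      : the same marker observed in the camera frame B (S2)
        R        : rotation aligning B's axes with C's axes (S21)
        R_AB,t_AB: rotation/translation of B in A, from arm parameters (S31)
        """
        d = np.asarray(P_C) - np.asarray(M_C)   # relative vector, S1
        M_Bp = R @ np.asarray(M_B)              # marker in aligned frame B', S22
        P_Bp = M_Bp + d                         # point in B', S23
        P_B = R.T @ P_Bp                        # back into B; R^-1 = R^T, S24-S25
        return R_AB @ P_B + t_AB                # into A, S32

    # Smoke test with identity rotations and a pure translation:
    I3 = np.eye(3)
    print(roi_point_in_A([1, 2, 3], [0, 2, 3], [0.5, 0, 0], I3, I3, np.array([10.0, 0, 0])))
    # -> [11.5  0.   0. ]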
it is a second object of the present application to provide a tumor localization system comprising:
a tomographic data source for providing tomographic images of human body trunk bones and body surface shapes and related organs, and at least comprising tomographic coordinate information of body surface markers, organs, tumors and blood vessels;
the transition image data source provides a body surface appearance space image and at least comprises space coordinate information of an origin and a body surface marker;
the positioning execution end is provided with positioning coordinates for executing a positioning result, and the positioning coordinates and the space coordinates have a fixed mapping relation;
the identification unit is used for automatically calibrating body surface markers, organs, tumors and blood vessels in subsequent tomographic images to generate identification data information;
the three-dimensional reconstruction unit is used for acquiring and processing the identification data information, updating the coordinate information of the marker, the organ, the tumor and the blood vessel, generating integrated three-dimensional model data, extracting a target area from the integrated three-dimensional model data and obtaining tomographic coordinate data of the target area;
the coordinate conversion unit converts the tomographic coordinate data of the target area into spatial coordinate information by associating the tomographic coordinate information with the spatial coordinate information through the body surface marker, and further converts the spatial coordinate information into positioning coordinate information according to a fixed mapping relation;
and the control and execution unit is used for controlling the position of the positioning execution end according to the positioning coordinate information.
Wherein,
the identification unit includes:
the image processing module is used for acquiring the tomographic image and carrying out gray scale and sharpening processing;
the image identification module is used for sending the artificially marked tomographic images into a convolutional neural network for training and identifying the region of interest in the tomographic images;
and the image storage module is used for storing the tomographic image information and the coordinate data of the identified region of interest.
The image storage module receives N tomographic images from a tomographic data source and stores all of them in a data folder, of which M (M ≤ N) images contain tumors;
the image processing module sequentially carries out gray scale processing and sharpening processing on the tomographic images;
the image storage module names the processed image: the first tumor-containing contour information is named as the j th image I j The last image containing tumor contour information is I j+M-1 The method comprises the steps of carrying out a first treatment on the surface of the And finally naming the volume data images according to the acquisition sequence, wherein the naming format can be as follows:
{I 1 ,I 2 ,I 3 ,……,I j ,I j+1 ,……,I j+M-1 ,……,I N };
extracting M images including tumors from an image storage module, and manually calibrating tumor contour lines and blood vessel contour lines (both contour lines are closed) according to the processed gray values;
and sending the manually calibrated image into a convolutional neural network to obtain an image recognition module.
After the preoperative, intraoperative and postoperative tomographic images are processed by the image storage module and the image processing module, the image recognition module automatically recognizes the contours of regions of interest such as tumors and blood vessels in the images, and the recognized, calibrated images are stored back into the image storage module.
Body surface markers, organs, etc. can also be automatically identified in the same manner as described above.
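As a minimal sketch of the gray-scale and sharpening preprocessing and the I_1 ... I_N naming convention described above (the directory layout, file format and sharpening kernel are illustrative assumptions; the patent does not specify them):

    import cv2
    import numpy as np
    from pathlib import Path

    def preprocess_and_store(src_dir, dst_dir):
        """Gray-scale and sharpen each tomographic image, then store the
        results named I_1 ... I_N in acquisition order (naming assumed)."""
        out = Path(dst_dir)
        out.mkdir(parents=True, exist_ok=True)
        kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]], dtype=np.float32)  # common 3x3 sharpener
        names = []
        for i, f in enumerate(sorted(Path(src_dir).glob("*.png")), start=1):
            gray = cv2.cvtColor(cv2.imread(str(f)), cv2.COLOR_BGR2GRAY)
            sharp = cv2.filter2D(gray, -1, kernel)   # sharpening processing
            name = "I_{}.png".format(i)
            cv2.imwrite(str(out / name), sharp)
            names.append(name)
        return names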
The three-dimensional reconstruction unit includes:
the first data processing module is used for acquiring information and coordinate data of the image storage module and processing the information and the coordinate data into integrated three-dimensional model data;
and the extraction module is used for extracting target information from the integrated three-dimensional model data.
The first data processing module performs three-dimensional reconstruction using the folder in the image storage module, the coding information of the images, the image information marked with the contours of the regions of interest, the coordinate data and so on; the reconstructed model is an integrated three-dimensional model marked with the contours of interest. The extraction module can select target information in the integrated three-dimensional model.
The integrated three-dimensional model can provide the coordinate data of the tumor in each tomographic image layer and the tumor area in each layer; it can also fully display the vascular environment adjacent to the tumor and, combined with internal parameters of the tomographic scanner such as the collimator, accurately estimate the tumor volume. This information can assist doctors in assessing the risk and difficulty of the operation.
The coordinate conversion unit includes:
the space coordinate module is used for acquiring space coordinate information of the body surface marker;
and the second data processing module is used for acquiring the extracted target information of the extraction module, calculating by using a mapping matrix between the space coordinates and the tomographic coordinates and between the space coordinates and the positioning coordinates, and outputting the positioning coordinate information.
In an embodiment of the present application, the tumor positioning system further includes an interaction and feedback unit. During positioning by the surgical robot, the doctor judges the positioning accuracy through a visualization device such as a laparoscope. When the surgical robot's positioning shows a large deviation, the current data information is fed back through the interaction and feedback unit to the identification module for retraining, continuously improving the identification and calibration accuracy of the identification module.
In an embodiment of the present application, the tumor positioning system further includes a storage and communication unit, configured to store all information of the current operation, such as the images and coordinates of the tumor and adjacent blood-vessel environment identified and positioned in the clinical operation, to facilitate subsequent learning, teaching and similar uses.
The present application also provides a set of computer storage media for storing the above-described units or modules for execution by a computer device processor.
It is a further object of the present application to provide a tumor positioning device for performing the above tumor positioning method.
The tumor positioning device provided by the application comprises:
a surgical robot having a positioning execution coordinate system;
the binocular camera is arranged at the tail end of the mechanical arm and is used for obtaining a space image of the body surface marker;
the tomography scanner obtains body surface markers and tomographic images of tumors, blood vessels and organs through multiple scanning and synchronously forms a tumor image coordinate system;
a tumor positioning system as described above;
wherein,
the positioning execution tail end of the surgical robot is provided with a double-head conversion clamp, and execution instruments and the binocular camera are respectively fixed.
The binocular camera serves as the transition system for constructing the transition image coordinate system. The double-head conversion clamp allows position interchange with the surgical operation instrument, so that the two occupy synchronized positions with only a small offset. This helps the binocular camera obtain a field of view matched to the surgical robot's operation, reduces the difficulty of mapping the transition image coordinate system into the positioning execution coordinate system, and improves the accuracy of the converted coordinates.
It is still another object of the present application to provide a method for predicting tumor progression using recognition unit and three-dimensional reconstruction unit outcomes in a tumor localization system.
The tumor development prediction method provided by the application specifically comprises the following steps:
T1. Acquiring periodic tomographic images of the same benign tumor patient, and using the identification unit and the three-dimensional reconstruction unit to obtain:
each phase includes the number of tomographic layers of the tumor,
the tumor contour in each layer of tomographic images,
tumor area in each layer of tomographic images;
T2. Obtaining the data difference within a preset period, to obtain:
a slice number difference value;
tumor contour amplification distance;
increasing the value of the tumor area;
T3. Acquiring big data of benign tumor patients, repeating steps T1 and T2, and constructing a data set;
T4. Repeating step T1 to identify the periodic tomographic images of the patient to be detected, and obtaining the data difference of the periodic images;
T5. Performing polling matching of the data difference obtained in step T4 against the data set constructed in step T3, to obtain:
the difference between the actual period and the expected period;
the difference between the actual tumor contour amplification distance and the tumor contour amplification distance at the matching period.
The physician can predict the tumor development of the patient to be examined by referring to the differences.
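A minimal sketch of steps T1-T2 under stated assumptions: the record layout, field names and example numbers below are illustrative, not taken from the patent, and the middle-slice alignment follows the pairing rule detailed in example 2:

    from dataclasses import dataclass

    @dataclass
    class PeriodRecord:
        """Per-period features produced by the identification and
        three-dimensional reconstruction units (step T1)."""
        layers_with_tumor: int   # number of tomographic layers containing tumor
        areas: list              # tumor area in each tumor-containing layer

    def period_difference(a, b):
        """Data difference over one preset period (step T2): layer-count
        difference plus per-layer area increases, middle slices aligned
        and the remaining slices paired outward to both sides."""
        ma, mb = len(a.areas) // 2, len(b.areas) // 2
        left = min(ma, mb)
        right = min(len(a.areas) - 1 - ma, len(b.areas) - 1 - mb)
        area_increase = [b.areas[mb + k] - a.areas[ma + k]
                         for k in range(-left, right + 1)]
        return {"layer_diff": b.layers_with_tumor - a.layers_with_tumor,
                "area_increase": area_increase}

    # Two consecutive examinations of the same patient (made-up numbers):
    first = PeriodRecord(3, [1.0, 2.0, 1.0])
    second = PeriodRecord(3, [1.5, 2.5, 1.25])
    print(period_difference(first, second))
    # {'layer_diff': 0, 'area_increase': [0.5, 0.5, 0.25]}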
Drawings
For a clearer description of the present application and of prior-art solutions, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic view of a tumor positioning apparatus according to embodiment 1 of the present application;
FIG. 2 is a schematic diagram of the workflow of the tumor localization system according to embodiment 1 of the present application;
FIG. 3 is a schematic diagram of the execution result of the identification unit in embodiment 1 of the present application;
fig. 4 is a three-dimensional coordinate diagram, in the coordinate system C, of a tumor and a blood vessel in the I_{j+m} layer obtained by the identification unit in embodiment 1 of the present application;
FIG. 5 is a schematic diagram of the coordinate-system image from which the I_{j+m} layer tumor area is calculated in example 1 of the present application;
FIG. 6 is a schematic diagram of reconstructing three-dimensional models of liver, tumor and blood vessel from coordinate points in example 1 of the present application;
FIG. 7 is a schematic diagram of a three-dimensional model reconstructed from liver tomographic image data and torso tomographic data in accordance with example 1 of the present application;
fig. 8 is a second schematic view of the reconstruction of an integrated three-dimensional model from the liver tomographic image data and the torso tomographic image data in example 1 of the present application.
Reference numerals:
1. surgical robot; 11. UR robotic arm;
2. tomography scanner; 3. binocular camera; 4. computer device; 5. clamp.
Description of the embodiments
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in numerous different ways without departing from the spirit or scope of the embodiments of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
Example 1
In the embodiment of the application, a tumor positioning device is provided, and a coordinate system of a real-time tomographic image of a CT device is mapped into a coordinate system of a UR mechanical arm through a binocular camera arranged at the UR mechanical arm of a surgical robot, so that any point in the tomographic image can be accurately converted into three-dimensional coordinate data in the coordinate system of the UR mechanical arm.
As shown in fig. 1, the tumor positioning apparatus includes a surgical robot 1, a tomographic scanner 2, a binocular camera 3, and a computer device 4.
The UR robot arm 11 of the surgical robot 1 is responsible for performing accurate positioning;
the tail end of the UR mechanical arm 11 is provided with a double-head conversion clamp 5, and surgical operation instruments and the binocular camera 3 are respectively arranged in the clamp 5.
In this embodiment, the tumor positioning device is used for positioning the surgical robot UR robot in a liver tumor resection operation.
During the operation, the binocular camera periodically acquires structural information of the external surface of the patient's upper body and reconstructs a three-dimensional model of the body surface. Specifically, the two cameras, placed apart, both image the target, and the 3D coordinates of the target point in the camera coordinate system are calculated from the geometric constraint relation between the two perspective geometric models.
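A minimal sketch of the triangulation this paragraph describes, for a rectified stereo pair (the focal length, baseline, principal point and pixel values are illustrative assumptions; a real system would use the calibrated intrinsics and extrinsics of the binocular camera):

    import numpy as np

    def triangulate(u_l, v_l, u_r, f=1000.0, baseline=0.12, cx=640.0, cy=360.0):
        """Recover a 3D point in the left-camera frame from a matched pixel
        pair on a rectified rig (the point lies on the same row in both
        images, which is the geometric constraint between the two views)."""
        disparity = u_l - u_r            # column shift between the two images
        Z = f * baseline / disparity     # depth from similar triangles
        X = (u_l - cx) * Z / f
        Y = (v_l - cy) * Z / f
        return np.array([X, Y, Z])

    # A marker seen at (700, 400) in the left image and (650, 400) in the right:
    print(triangulate(700, 400, 650))    # -> [0.144 0.096 2.4  ]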
Preoperatively, tomographic scan data of the patient's abdominal cavity are acquired periodically and imported into medical three-dimensional reconstruction software to reconstruct three-dimensional models of the tumor in the abdominal cavity and the liver, obtaining the relative positional relationship between the intra-abdominal tissue and the feature points on the surface of the human body.
In this embodiment, the nipple and navel of the patient are selected as the characteristic points of the human surface. The image information acquired by the binocular camera and the tomography scanner comprises the characteristic points.
The UR mechanical arm establishes coordinate system A with the position of the robot base as reference. The axis directions of the robot base coordinate system A are: the Z axis lies along joint axis i, and since the robot stands on horizontal ground the Z axis points vertically upward; since joint axis i intersects joint axis i+1, the X axis is perpendicular to the plane containing joint axes i and i+1; the Y axis follows from the right-hand rule.
The binocular camera acquires the pose of the target from the binocular stereoscopic vision principle and the camera intrinsics, and establishes coordinate system B with the camera center as origin. Axis direction of the binocular camera coordinate system B: the Z axis points vertically upward, perpendicular to the ground. The positional relation of the human surface feature points relative to coordinate system B is calculated using epipolar geometry and coordinate system conversion.
Coordinate system C is built into the tomography scanner: with the center of the patient's body as origin, when the patient lies supine the left-hand direction is the positive X axis, the back direction is the positive Y axis, and the head direction is the positive Z axis. The positional relations of the intra-abdominal tissue and the human surface feature points relative to coordinate system C are obtained by tomography.
In this embodiment, three-phase dynamic scanning of the upper-abdominal liver (arterial phase, portal venous phase and venous phase) yields three-dimensional data of the liver, the intrahepatic tumor and the blood vessels; after the three-phase enhancement scan is finished, a further scan yields three-dimensional data of the human trunk bones and body surface appearance. The tomographic data are acquired and used for training, the real-time position information of the liver, tumor, blood vessels and human body surface feature points is identified, and three-dimensional reconstruction of the identified image data gives an integrated three-dimensional model. From the integrated three-dimensional model, the three-dimensional coordinates of the liver, tumor, blood vessels and human body surface feature points in the image coordinate system C are obtained. Let the three-dimensional coordinates of a human body surface feature point be M_C, and the three-dimensional coordinates of any point on the intrahepatic tumor be P_C. The relative positional relationship between any point on the tumor and the human body surface feature point can then be expressed by the vector

d = P_C - M_C.

The binocular camera arranged at the end of the mechanical arm calculates, from the disparity map, using the mapping matrix and the camera intrinsics, the three-dimensional coordinates of the human body surface feature point in the binocular camera coordinate system B, set as M_B. The three-dimensional coordinates P_B of any point on the tumor in the binocular camera coordinate system B can then be obtained by matrix transformation of vectors in space.
The calculation process is as follows:

from the rotation matrix R, obtain the coordinate system B', i.e. the coordinate system B rotated so that its axes have the same directions as the coordinate axes of C;

knowing M_B, obtain from the formula M_B' = R · M_B the coordinates M_B' of the feature point in the B' coordinate system;

by the vector algorithm, obtain from the formula P_B' = M_B' + d the coordinates P_B' of the tumor point in the B' coordinate system;

then from the rotation matrix R obtain its inverse R^-1 (= R^T);

finally, by the formula P_B = R^-1 · P_B', obtain the coordinates P_B of the tumor point in the B coordinate system.
The binocular camera is arranged at the end of the mechanical arm, so the three-dimensional coordinates of the origin of coordinate system B in the robot base coordinate system A are known from the arm's internal parameters, set as O_B. The three-dimensional spatial coordinates of any point on the tumor in the robot base coordinate system A are then obtained by a further spatial vector-matrix operation; the calculation process is as follows:

from the rotation-translation matrix

T_AB = [ R_AB  t_AB ; 0  1 ],

obtain the transformation matrix between the A coordinate system and the B coordinate system, where the rotation R_AB and the translation t_AB = O_B are provided by the robot arm parameters;

through the formula [P_A; 1] = T_AB · [P_B; 1], i.e. P_A = R_AB · P_B + t_AB, obtain the coordinates of the tumor point in the A coordinate system.

In sum, the conversion matrix operations among the three coordinate systems give the three-dimensional coordinates P_A, in coordinate system A, of any point on the intrahepatic tumor.
Similarly, all three-dimensional coordinate values of the intrahepatic tumor obtained by the three-phase dynamic scan of the upper-abdominal liver can be converted by the same calculation into three-dimensional coordinates in the robot coordinate system, enabling the robot system to perceive environmental information such as the human trunk, the abdominal environment and organ positions.
As shown in fig. 2, there is also provided a tumor positioning system in this embodiment, including:
a tomographic data source for obtaining tomographic image data of human body trunk bones, body surface shapes and related organs from a tomographic scanner, wherein the data at least comprises tomographic coordinate information of body surface markers, organs, tumors and blood vessels;
the transition image data source is used for obtaining body surface appearance space image data from a binocular camera, wherein the data at least comprises space coordinate information of an origin and a body surface marker;
the identification unit is used for automatically calibrating body surface markers, organs, tumors and blood vessels in subsequent tomographic images to generate identification data information;
the three-dimensional reconstruction unit is used for acquiring and processing the identification data information, updating the coordinate information of the marker, the organ, the tumor and the blood vessel, generating integrated three-dimensional model data, extracting a target area from the integrated three-dimensional model data and obtaining tomographic coordinate data of the target area;
the coordinate conversion unit converts the tomographic coordinate data of the target area into spatial coordinate information by associating the tomographic coordinate information with the spatial coordinate information through the body surface marker, and further converts the spatial coordinate information into positioning coordinate information according to a fixed mapping relation;
the control and execution unit is used for controlling the tail end position of the UR mechanical arm according to the positioning coordinate information;
the interaction and feedback unit is used for realizing data interaction and improving the calibration accuracy of the identification unit;
and the storage and communication unit is used for storing all information of the current operation.
Wherein,
as shown in fig. 3, the recognition unit is mainly used for completing the following operations:
(1) Acquiring the N tomographic images (volume data images) from the chest to the navel position acquired by a tomographic scanner, and storing all the tomographic images in a volume data folder, of which M (M ≤ N) tomographic images together contain tumor tissue;
(2) Carrying out gray-scale processing and sharpening processing on the volume data images. Let the first image containing tumor contour information be the j-th, named I_j, and the last image containing tumor contour information be I_{j+M-1}; for a certain layer I_{j+m} (m < M), the coordinate information of a point on the tumor contour in the volume data image is (X_1, Y_1, Z_1). Finally, name the volume data images in acquisition order, in the format:
{I_1, I_2, I_3, ..., I_j, I_{j+1}, ..., I_{j+M-1}, ..., I_N};
the liver, the blood vessel and the lesion tissue are distinguished according to the difference of gray values in the tomographic images of the tumor outline and the blood vessel outline of M Zhang Baohan, so that the boundary line between the liver and the tumor and the boundary line between the liver and the blood vessel are artificially marked and defined (the boundary line is a closed curve).
(3) The marked images are sent to a convolutional neural network for training to obtain an in-vivo environment space recognition model and a tumor space recognition model, which are then used to recognize the tumor contours and blood vessel contours, respectively, in subsequent tomographic images.
The identification unit processes the N pieces of three-phase dynamic scanning image data to be detected to obtain image data marked with tumor contours and blood vessel contours.
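The patent specifies only that manually calibrated images are fed to a convolutional neural network; as a hedged sketch, the contour recognition can be posed as binary segmentation. The tiny network, loss, optimizer and tensor shapes below are illustrative assumptions:

    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """A deliberately small encoder-decoder standing in for the
        patent's unspecified convolutional neural network."""
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.Conv2d(16, 1, 1),   # per-pixel contour/tissue logit
            )

        def forward(self, x):
            return self.dec(self.enc(x))

    def train_step(model, opt, images, masks):
        """images: (B,1,H,W) gray slices; masks: (B,1,H,W) manual labels."""
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(model(images), masks)
        loss.backward()
        opt.step()
        return loss.item()

    model = TinySegNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(2, 1, 64, 64)                   # stand-in slices
    y = (torch.rand(2, 1, 64, 64) > 0.9).float()   # stand-in manual masks
    print(train_step(model, opt, x, y))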
As shown in fig. 4, the three-dimensional reconstruction unit acquires the image data and processes it to obtain, for a certain I_{j+m} layer, the three-dimensional coordinates (X_{j+m}, Y_{j+m}, Z_{j+m}) of the tumor and blood vessel in coordinate system C.
As shown in FIG. 5, extract the tumor contour in a certain I_{j+m} layer and combine it with the coordinate system information in example 1. The tumor contour is a closed convex curve C in the plane, divided into three sections described by the functions y = f_1(x), y = f_2(x) and y = f_3(x) over their corresponding x-intervals. The area S_{j+m} of the tumor image in the I_{j+m} layer is then calculated from the line integral around C,

S_{j+m} = (1/2) ∮_C (x dy - y dx),

evaluated section by section over f_1, f_2 and f_3, which yields the area information of the tumor image of that layer.
As shown in fig. 6, the image data are stacked in their named order to form a continuum that includes the tumor and blood-vessel contours and the related three-dimensional coordinate data. Similarly, using the tomographic image data of the human torso bones and body surface contours to be detected, a continuum including the human surface feature points, the liver and their three-dimensional coordinates in coordinate system C can be obtained, either by an additionally trained model or directly by manual annotation. As shown in fig. 7 and 8, fusing the two continua yields integrated three-dimensional model data including the human surface feature points, liver, tumor and blood-vessel contours and coordinates, so that the region of interest in each tomographic image layer corresponds to a fixed region of the integrated three-dimensional model.
Extracting the image statistics containing tumor contours from the integrated three-dimensional model data into an object array gives the tomographic slice thickness h; by the formula

V = h · (S_j + S_{j+1} + ... + S_{j+M-1})

the tumor volume can be calculated.
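A minimal sketch of the per-slice area and the stacked-volume formula above, using the discrete (shoelace) form of the contour line integral; the contour points and slice thickness are illustrative assumptions:

    import numpy as np

    def contour_area(points):
        """Shoelace form of S = (1/2) * closed-integral(x dy - y dx) for a
        closed contour given as an (n, 2) array of (x, y) vertices."""
        x, y = points[:, 0], points[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    def tumor_volume(slice_contours, h):
        """V = h * (sum of per-slice areas S_{j+m}), h = slice thickness."""
        return h * sum(contour_area(c) for c in slice_contours)

    # Smoke test: a 10 x 10 square contour on 3 slices, 2 mm apart -> 600 mm^3
    square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
    print(tumor_volume([square] * 3, h=2.0))   # 600.0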
Data such as the tumor volume in the images to be detected, the adjacency between tumor and blood vessels, the degree of vessel density and the cross-sectional areas, together with the image information, can serve as auxiliary references for the doctor's preoperative risk assessment and operation difficulty assessment.
The coordinate conversion unit stores a mapping matrix between the coordinate system B and the coordinate system A and a mapping matrix between the coordinate system B and the coordinate system C and binocular camera internal reference data. In the process of acquiring the three-dimensional coordinates of targets such as tumors and blood vessels in the coordinate system C, the coordinate conversion unit synchronously acquires the three-dimensional coordinates of the body surface markers in the coordinate system B from the transition image data source, then converts the coordinate system according to the stored data (the conversion process can refer to the content in the tumor positioning device), and finally converts the three-dimensional coordinates in the coordinate system C into the three-dimensional coordinates in the coordinate system A.
After the three-dimensional coordinates of the coordinate system A are obtained, the control and execution unit is used for controlling the UR mechanical arm, so that the operation tail end of the UR mechanical arm moves to the corresponding site, and the operation is performed.
During the operation, the doctor can subjectively judge the accuracy of the robot's liver tumor positioning through the laparoscopic view; if a large deviation occurs in positioning the target tumor, the acquired data are sent through the interaction and feedback unit to the recognition unit for retraining, so that the relevant recognition model is optimized.
The tumor positioning system provided by the application can also store, through the storage and communication unit, the coordinate information of the tumor and adjacent vascular environment identified and positioned in each clinical operation, for later use in cultivating young doctors' clinical experience and in daily basic work.
The above-described image recognition is mainly performed cooperatively by an image processing module, an image recognition module, an image storage module, and the like as computer storage media in the recognition unit by means of a computer device processor.
The above-mentioned integrated three-dimensional model data construction is mainly performed cooperatively by a first data processing module, an extraction module, etc. as a computer storage medium in the three-dimensional reconstruction unit by means of a computer device processor.
The above-mentioned coordinate conversion is mainly performed cooperatively by a spatial coordinate module, a second data processing module, and the like, which are computer storage media in the coordinate conversion unit, by means of a computer device processor.
The control and execution unit, the interaction and feedback unit, the storage and communication unit, etc. described above are also implemented as computer storage media, cooperatively by means of a computer device processor.
Example 2
By using the method for constructing integrated three-dimensional model data in embodiment 1, the tumor development condition can be predicted, and the method specifically comprises the following two parts:
p1. Construction of tumor growth time set Using benign tumor big data
Acquiring a certain number of periodic tomographic images of benign tumor patients, identifying the tumor contours of each layer by using the identification unit provided in the embodiment 1, and acquiring the number of naming files containing tumor and a certain I j+m Tumor area data in slice;
comparing image layer number changes containing tumors in different periods
Comparing the number of layers of images comprising tumors in different periods through file naming, and judging that the tumors longitudinally diffuse if the number of layers is increased;
(3) Compare the tumor areas of the different periods, specifically:
let M be common in the first tomographic examination result of the patient 1 (M 1 <N) scan images containing tumor tissue, let I be the first image containing tumor contour sections j1 Let M 1 The middle slice of the tomographic image is I j1+m1 The area is defined as S j1+m1
A second film taking and rechecking after a period of time, and the result of the tomography scanning is M 2 (M 1 <=M 2 <=N)The image includes tumor tissue, and the first image including tumor outline is set as I j2 Let M 2 The middle slice of the tomographic image is I j2+m2 The area is defined as S j2+m2
Wherein the method comprises the steps of j2<=j1、j2+M2-1>=j1+M1-1 (equal sign when two examined tumors did not change)
S j1 、S j1+1 、……、S j1+m1-2 、S j1+m1-1 、S j1+m1 、S j1+m1+1 、S j1+m1+2 、……、S j1+M1-1。(m1<M1)
And
S j2 、S j2+1 、……、S j2+m2-2 、S j2+m2-1 、S j2+m2 、S j2+m2+1 、S j2+m2+2 、……、S j2+M2-1。(m2<M1) the slice containing the tumor outline in the first and second tomography results are each layer of images arranged in sequence from top to bottom.
First, the intermediate slice S of those tomographic images containing tumor tissue is scanned twice j1+m1 、S j2+m2 The method comprises the steps of performing one-to-one correspondence with reference, and performing one-to-one correspondence on the rest slices according to the sequence from the middle to the two sides, namely respectively:
……、S j2+m2-2 :S j1+m1-2 、S j2+m2-1 :S j1+m1-1 、(S j2+m2 :S j1+m1 )、S j2+m2+1 :S j1+m1+1 、S j2+m2+2 :S j1+m1+2 、……;
after determining which layer images changed in area S between the two tomographic scans taken a period apart, analyze those layer images;
(4) Compare the tumor contour amplification distance per preset period, specifically:
import the image results of scans taken after the two different time intervals into a preset time set. In the layer images that changed correspondingly between two tomographic results one period t apart, let the tumor contour amplification distance be d_t, with range (d_{t1} ~ d_{t2}); in the layer images that changed between two results two periods 2t apart, let the tumor contour amplification distance be d_{2t}, with range (d_{2t1} ~ d_{2t2}); in the layer images that changed between two results three periods 3t apart, let it be d_{3t}, with range (d_{3t1} ~ d_{3t2}); ...; in the layer images that changed between two results n periods nt apart, let it be d_{nt}, with range (d_{nt1} ~ d_{nt2}). The duration of one period may be one month, one quarter, half a year or one year;
finally, for patients whose two compared tomographic scans are separated by one, two, three, ..., n periods, the tumor contour amplification distance differences in the correspondingly changed layer images are d_t, d_{2t}, d_{3t}, ..., d_{nt}, respectively.
(5) The comparison data obtained in steps (2)-(4) constitute the time set of benign tumor growth.
P2. Comparing the case to be detected with the time set of benign tumor growth
(6) Obtain two groups of tomographic images separated by a period T, identify the tumor contour in each layer using the identification unit, and obtain the number of named files containing tumor and the tumor area data in a certain I_{j+m} slice;
(7) Obtain the change in the number of image layers containing tumor between the periods;
(8) Obtain the periodic tumor area changes:
take the middle slices S_{j1+m1} and S_{j2+m2} of the tumor-containing tomographic images of the two scans as the reference correspondence, and pair the remaining slices one-to-one in order from the middle outward to both sides, i.e.:
..., S_{j2+m2-2} : S_{j1+m1-2}, S_{j2+m2-1} : S_{j1+m1-1}, (S_{j2+m2} : S_{j1+m1}), S_{j2+m2+1} : S_{j1+m1+1}, S_{j2+m2+2} : S_{j1+m1+2}, ...;
(9) Obtain the periodic tumor amplification distance change:
analyze and calculate the results of the two tomographic scans, first calculating the tumor contour amplification distance d in those layer images whose area changed;
(10) Prediction of tumor development:
(a) using the time interval from the patient's first tomographic scan to the recheck as a reference, poll the constructed tumor-growth time set to obtain the risk of tumor spread;
(b) let the interval between the patient's two scans be T = 3t. In the constructed tumor-growth time set, the tumor contour amplification distance at T = 3t is d_{3t}, with range (d_{3t1} ~ d_{3t2}). The calculated amplification distance d in the correspondingly changed layer images of the two tomographic results is compared with d_{3t} from the time set, assisting the doctor in assessing the development of the patient's condition.
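A minimal sketch of step (10): polling the tumor-growth time set for the patient's interval and comparing the observed amplification distance with the benign range (the dictionary layout and all numbers are illustrative assumptions):

    # Time set: interval in periods t -> (d_low, d_high), the range of contour
    # amplification distances in the benign-tumor big data (made-up values).
    TIME_SET = {1: (0.5, 1.2), 2: (1.0, 2.3), 3: (1.6, 3.4)}

    def assess(n_periods, d_observed):
        """Poll the time set for the given interval and place the observed
        contour amplification distance relative to the benign range."""
        d_low, d_high = TIME_SET[n_periods]
        if d_observed < d_low:
            return "slower than the benign reference range"
        if d_observed > d_high:
            return "faster than the benign reference range"
        return "within the benign reference range"

    # T = 3t with an observed amplification distance of 4.0:
    print(assess(3, 4.0))   # faster than the benign reference range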

Claims (9)

1. A method for predicting tumor progression, comprising the steps of:
T1. acquiring periodic tomographic images of the same benign tumor patient, and obtaining, by using an identification unit and a three-dimensional reconstruction unit in a tumor positioning system:
each phase includes the number of tomographic layers of the tumor,
the tumor contour in each layer of tomographic images,
tumor area in each layer of tomographic images;
T2. obtaining the data difference within a preset period, to obtain:
a slice number difference value;
tumor contour amplification distance;
increasing the value of the tumor area;
T3. acquiring big data of benign tumor patients, repeating the steps T1 and T2, and constructing a data set;
T4. repeating the step T1 to identify the periodic tomographic images of the patient to be detected, and obtaining the data difference of the periodic images;
T5. performing polling matching of the data difference obtained in step T4 against the data set constructed in step T3, to obtain:
the difference between the actual tumor contour amplification distance and the tumor contour amplification distance in the matching period;
the tumor localization system comprises:
a tomographic data source for providing tomographic images of the human torso bones, body surface shapes and related organs, and including at least tomographic coordinate information of body surface markers, organs, tumors, blood vessels;
a transition image data source for providing a body surface appearance space image and at least comprising space coordinate information of the body surface marker;
the positioning execution end is provided with positioning coordinates for executing a positioning result, and the positioning coordinates and the space coordinates have a fixed mapping relation;
the identification unit is used for automatically calibrating the body surface markers, the organs, the tumors and the blood vessels in subsequent tomographic images to generate identification data information;
the three-dimensional reconstruction unit is used for acquiring and processing the identification data information, updating coordinate information of the body surface marker, the organ, the tumor and the blood vessel, generating integrated three-dimensional model data, and extracting a target area from the integrated three-dimensional model data to obtain tomographic coordinate data of the target area;
the coordinate conversion unit converts the tomographic coordinate data of the target area into spatial coordinate information by associating the tomographic coordinate information and the spatial coordinate information of the body surface marker, and further converts it into positioning coordinate information according to the fixed mapping relation;
the control and execution unit is used for controlling the position of the positioning execution end according to the positioning coordinate information;
wherein, the three-dimensional reconstruction unit mainly completes the following workflow:
sequentially stacking the image data marked with the tumor outline and the blood vessel outline, which are obtained by the identification unit, according to the naming sequence to form a continuum; the continuum comprises tumor, vessel contour and related three-dimensional coordinate data;
using the human body trunk bones and body surface appearance tomographic image data to be detected, and obtaining a continuum through an additionally trained model or by manual direct labeling; the continuum comprises body surface markers, liver and related three-dimensional coordinate data;
and fusing the two continuous bodies to obtain integrated three-dimensional model data comprising body surface markers, livers, tumors, blood vessel outlines and coordinates, so that the region of interest in the tomographic image layer corresponds to a fixed region of the integrated three-dimensional model data.
2. The tumor development prediction method according to claim 1, wherein the identification unit comprises:
the image processing module is used for acquiring the tomographic image and carrying out gray scale and sharpening processing;
the image identification module is used for sending the artificially marked tomographic images into a convolutional neural network for training and identifying the region of interest in the tomographic images;
and the image storage module is used for storing the tomographic image information and the coordinate data of the identified region of interest.
3. The tumor development prediction method according to claim 2, wherein the three-dimensional reconstruction unit includes:
the first data processing module is used for acquiring information and coordinate data of the image storage module and processing the information and the coordinate data into the integrated three-dimensional model data;
and the extraction module is used for extracting target information from the integrated three-dimensional model data.
4. The tumor development prediction method according to claim 3, wherein the coordinate conversion unit includes:
the space coordinate module is used for acquiring space coordinate information of the body surface marker;
and the second data processing module is used for acquiring the information of the space coordinate module and the extraction module, calculating by using a mapping matrix between the space coordinate and the tomographic coordinate and between the space coordinate and the positioning coordinate, and outputting the positioning coordinate information.
5. A tumor positioning method based on the tumor positioning system adopted in the tumor development prediction method according to any one of claims 1-4, characterized in that a transition image coordinate system with spatial position correlation with a positioning execution coordinate system is constructed, and the transition image coordinate system is subjected to mapping conversion with the tumor image coordinate system through the same body surface marker, so that any point in the tumor image coordinate system is converted into a three-dimensional coordinate in the positioning execution coordinate system;
wherein,
the positioning execution coordinate system is used for executing space positioning;
the transition image coordinate system is used for calibrating the space data information of the body surface marker;
the body surface marker is an easily-identified body structure on the surface of the human body;
the tumor image coordinate system is used for marking the volume data information of tumor, blood vessel, organ and body surface markers.
6. The tumor localization method according to claim 5, comprising the specific steps of:
S1, acquiring the three-dimensional coordinates of the body surface marker $M$ in the tumor image coordinate system $C$, denoted ${}^{C}P_{M}$, and the coordinates of any point $T$ of the region of interest, denoted ${}^{C}P_{T}$; the relative positional relationship between any point of the region of interest and the body surface marker is the vector

$${}^{C}\vec{v} = {}^{C}P_{T} - {}^{C}P_{M};$$

S2, acquiring the three-dimensional coordinates of the body surface marker in the transition image coordinate system $B$, denoted ${}^{B}P_{M}$, and calculating the three-dimensional coordinates of any point of the region of interest in the transition image coordinate system, denoted ${}^{B}P_{T}$; the calculation process is as follows:

S21, determining the rotation matrix $R$ that rotates the coordinate system $B$ into an auxiliary coordinate system $B'$ whose axes point in the same directions as the coordinate axes of $C$;

S22, with ${}^{B}P_{M}$ known, obtaining by the formula ${}^{B'}P_{M} = R\,{}^{B}P_{M}$ the coordinates of the body surface marker in the $B'$ coordinate system;

S23, obtaining by vector addition ${}^{B'}P_{T} = {}^{B'}P_{M} + {}^{C}\vec{v}$ the coordinates of the point $T$ in the $B'$ coordinate system, the relative vector carrying over unchanged because the axes of $B'$ are parallel to those of $C$;

S24, forming the inverse rotation matrix $R^{-1}$ from the rotation matrix $R$;

S25, obtaining according to the formula ${}^{B}P_{T} = R^{-1}\,{}^{B'}P_{T}$ the coordinates of the point $T$ in the $B$ coordinate system;

S3, calculating, according to the mapping relation inherent in the spatial position association, the three-dimensional coordinates of the origin of the transition image coordinate system in the positioning execution coordinate system $A$, denoted ${}^{A}P_{O_{B}}$, and further calculating the three-dimensional coordinates of any point of the region of interest in the positioning execution coordinate system; the calculation process is as follows:

S31, forming the rotation-translation matrix

$$T_{AB} = \begin{bmatrix} R_{AB} & t_{AB} \\ 0 & 1 \end{bmatrix},$$

i.e. the transformation matrix that maps coordinates in the $B$ coordinate system into the $A$ coordinate system, wherein the values of $R_{AB}$ and $t_{AB}$ are provided by the robot arm parameters;

S32, obtaining through the formula

$$\begin{bmatrix} {}^{A}P_{T} \\ 1 \end{bmatrix} = T_{AB} \begin{bmatrix} {}^{B}P_{T} \\ 1 \end{bmatrix}$$

the coordinates of the point $T$ in the $A$ coordinate system, expressed as:

$${}^{A}P_{T} = R_{AB}\,{}^{B}P_{T} + t_{AB}.$$
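Read together, steps S1–S32 chain two rigid transforms. The sketch below follows that chain in NumPy as one illustrative reading under the notation above; the function and argument names are not from the patent, $R$ is assumed orthonormal, and $R_{AB}$, $t_{AB}$ are assumed to come from the robot arm parameters:

```python
import numpy as np

def locate_roi_point(C_P_M, C_P_T, B_P_M, R, R_AB, t_AB):
    """Map a region-of-interest point from the tumor image system C
    into the positioning execution system A, following steps S1-S32.

    Illustrative arguments (assumptions, not patent terminology):
      C_P_M, C_P_T : (3,) marker and target coordinates in C
      B_P_M        : (3,) marker coordinates in B
      R            : (3, 3) rotation aligning B's axes with C's (S21)
      R_AB, t_AB   : robot-arm rotation and translation mapping B into A (S31)
    """
    v = C_P_T - C_P_M            # S1: marker-to-target vector in C
    Bp_P_M = R @ B_P_M           # S22: marker in the rotated system B'
    Bp_P_T = Bp_P_M + v          # S23: B' axes parallel to C, vector reused
    B_P_T = R.T @ Bp_P_T         # S24-S25: R orthonormal, so R^-1 = R.T
    return R_AB @ B_P_T + t_AB   # S31-S32: rigid map of B coordinates into A
```

With $R$ obtained from the registration of claim 4 and $R_{AB}$, $t_{AB}$ read from the robot controller, the returned vector would be the target coordinate handed to the positioning execution end.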
7. A tumor positioning device, characterized in that it performs the tumor positioning method according to claim 5 or 6.
8. A tumor positioning device, comprising:
a surgical robot having a positioning execution coordinate system;
a binocular camera for obtaining a spatial image of the body surface marker;
a tomography scanner that obtains tomographic images of the body surface marker and of tumors, blood vessels and organs through multiple scans, and synchronously forms the tumor image coordinate system;
the tumor localization system employed in the tumor development prediction method of any one of claims 1 to 4;
wherein:
the positioning execution end of the surgical robot is provided with a double-head conversion clamp, to which the execution instrument and the binocular camera are respectively fixed.
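Claim 8 does not state how the binocular camera turns the two marker images into spatial coordinates; a conventional route is stereo triangulation using the calibrated projection matrices of the two cameras. A minimal sketch using OpenCV's triangulatePoints, with all names illustrative:

```python
import numpy as np
import cv2

def marker_spatial_coords(P_left, P_right, uv_left, uv_right):
    """Triangulate body surface markers from a calibrated stereo pair.

    P_left, P_right   : (3, 4) projection matrices from stereo calibration
    uv_left, uv_right : (2, N) float32 pixel coordinates of the markers
    Returns a (3, N) array of marker coordinates in the camera frame.
    """
    X_h = cv2.triangulatePoints(P_left, P_right, uv_left, uv_right)
    return X_h[:3] / X_h[3]      # homogeneous -> Euclidean coordinates
```

The resulting camera-frame coordinates correspond to the spatial coordinate information consumed by the space coordinate module of claim 4.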
9. A computer storage medium, characterized in that it stores the modules of the tumor localization system employed in the tumor development prediction method according to any one of claims 2 to 4, the modules being executable by a processor of a computer device.
CN202310188753.9A 2023-03-02 2023-03-02 Tumor positioning method, system, device and tumor development prediction method Active CN115919464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310188753.9A CN115919464B (en) 2023-03-02 2023-03-02 Tumor positioning method, system, device and tumor development prediction method


Publications (2)

Publication Number Publication Date
CN115919464A CN115919464A (en) 2023-04-07
CN115919464B (en) 2023-06-23

Family

ID=86651022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310188753.9A Active CN115919464B (en) 2023-03-02 2023-03-02 Tumor positioning method, system, device and tumor development prediction method

Country Status (1)

Country Link
CN (1) CN115919464B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101689220A (en) * 2007-04-05 2010-03-31 奥利安实验室有限公司 Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition
CN105286988A (en) * 2015-10-12 2016-02-03 北京工业大学 CT image-guided liver tumor thermal ablation needle location and navigation system
CN108335304A (en) * 2018-02-07 2018-07-27 华侨大学 Aortic aneurysm segmentation method for abdominal CT scan sequence images
CN109087318A (en) * 2018-07-26 2018-12-25 东北大学 MRI brain tumor image segmentation method based on an optimized U-net network model
CN109758227A (en) * 2019-01-23 2019-05-17 广州安泰创新电子科技有限公司 Tumor ablation simulation method, device, electronic equipment and readable storage medium
CN111839730A (en) * 2020-07-07 2020-10-30 厦门大学附属翔安医院 Photoacoustic imaging surgical navigation platform for guiding tumor resection
WO2021081841A1 (en) * 2019-10-30 2021-05-06 未艾医疗技术(深圳)有限公司 Splenic tumor recognition method based on vrds 4d medical image, and related apparatus
WO2021126370A1 (en) * 2019-12-20 2021-06-24 Genentech, Inc. Automated tumor identification and segmentation with medical images
CN114469342A (en) * 2022-01-17 2022-05-13 四川大学华西医院 Definition method, establishment system and application of tumor margin edge field

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6984211B2 (en) * 2003-01-03 2006-01-10 Mayo Foundation For Medical Education And Research Detection of tumor halos in ultrasound images
US20120135874A1 (en) * 2009-05-08 2012-05-31 The Johns Hopkins University Single molecule spectroscopy for analysis of cell-free nucleic acid biomarkers
EP2650682A1 (en) * 2012-04-09 2013-10-16 Fundació Privada Institut de Recerca Biomèdica Method for the prognosis and treatment of cancer metastasis
US9622821B2 (en) * 2012-10-11 2017-04-18 University of Pittsburgh—of the Commonwealth System of Higher Education System and method for structure-function fusion for surgical interventions
JP6577873B2 (en) * 2013-03-15 2019-09-18 フンダシオ、インスティトゥト、デ、レセルカ、ビオメディカ(イエレベ、バルセロナ)Fundacio Institut De Recerca Biomedica (Irb Barcelona) Methods for prognosis and treatment of cancer metastasis
IL245560B1 (en) * 2016-05-09 2024-01-01 Elbit Systems Ltd Localized optical coherence tomography images for ophthalmological surgical procedures
US20190262212A1 (en) * 2018-02-27 2019-08-29 The Board Of Regents Of The University Of Texas System Methods and devices for biofluid flow assist
CN110368073A (en) * 2019-05-21 2019-10-25 山东大学 Tumor localization method and system for tumor puncture
CN110738701B (en) * 2019-10-23 2020-08-11 上海聚慕医疗器械有限公司 Tumor three-dimensional positioning system
CN111640100B (en) * 2020-05-29 2023-12-12 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium
CN112641512B (en) * 2020-12-08 2023-11-10 北京信息科技大学 Spatial registration method applied to preoperative robot planning
CN115661152B (en) * 2022-12-27 2023-04-07 四川大学华西医院 Target development condition analysis method based on model prediction



Similar Documents

Publication Publication Date Title
US20180158201A1 (en) Apparatus and method for registering pre-operative image data with intra-operative laparoscopic ultrasound images
US10828106B2 (en) Fiducial marking for image-electromagnetic field registration
CN102999902B (en) Optical guidance positioning navigation method based on CT registration result
EP2584990B1 (en) Focused prostate cancer treatment system
CN101474075B (en) Navigation system of minimal invasive surgery
JP5265091B2 (en) Display of 2D fan-shaped ultrasonic image
JP5345275B2 (en) Superposition of ultrasonic data and pre-acquired image
JP5662638B2 (en) System and method of alignment between fluoroscope and computed tomography for paranasal sinus navigation
EP2081494B1 (en) System and method of compensating for organ deformation
JP4795099B2 (en) Superposition of electroanatomical map and pre-acquired image using ultrasound
WO2015161728A1 (en) Three-dimensional model construction method and device, and image monitoring method and device
JP5622995B2 (en) Display of catheter tip using beam direction for ultrasound system
JP2022507622A (en) Use of optical codes in augmented reality displays
CN109984843B (en) Fracture closed reduction navigation system and method
KR20090059048A (en) Anatomical modeling from a 3-d image and a surface mapping
JP2006305359A (en) Software product for three-dimensional cardiac imaging using ultrasound contour reconstruction
JP2006305358A (en) Three-dimensional cardiac imaging using ultrasound contour reconstruction
JP2007537816A (en) Medical imaging system for mapping the structure of an object
CN115919464B (en) Tumor positioning method, system, device and tumor development prediction method
Patel et al. Improved automatic bone segmentation using large-scale simulated ultrasound data to segment real ultrasound bone surface data
KR20160057024A (en) Markerless 3D Object Tracking Apparatus and Method therefor
JP2022517807A (en) Systems and methods for medical navigation
WO2019153983A1 (en) Surgical scalpel tip thermal imaging system and method therefor
CN116485850A (en) Real-time non-rigid registration method and system for surgical navigation image based on deep learning
Schenkenfelder et al. Elastic registration of abdominal MRI scans and RGB-D images to improve surgical planning of breast reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant