CN116158849A - Vascular intervention navigation system, method, electronic device and readable storage medium - Google Patents

Info

Publication number
CN116158849A
Authority
CN
China
Prior art keywords
blood vessel
image
preoperative
intra-operative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310140813.XA
Other languages
Chinese (zh)
Inventor
Name withheld upon request
庄晓东 (Zhuang Xiaodong)
熊麟霏 (Xiong Linfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wimi Robotics Co ltd
Original Assignee
Shenzhen Wimi Robotics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wimi Robotics Co ltd filed Critical Shenzhen Wimi Robotics Co ltd
Priority to CN202310140813.XA priority Critical patent/CN116158849A/en
Publication of CN116158849A publication Critical patent/CN116158849A/en
Pending legal-status Critical Current

Classifications

    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 17/3403: Needle locating or guiding means
    • A61B 2017/3413: Needle locating or guiding means guided by ultrasound
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/108: Computer-aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2063: Acoustic tracking systems, e.g. using ultrasound
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 90/06: Measuring instruments not otherwise provided for
    • A61B 2090/065: Measuring instruments for measuring contact or contact pressure
    • G06T 19/003: Navigation within 3D models or images
    • G06T 7/11: Region-based segmentation
    • G06T 7/33: Determination of transform parameters for image registration using feature-based methods
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10132: Ultrasound image (image acquisition modality)
    • G06T 2207/30101: Blood vessel; artery; vein; vascular (subject of image)

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention provides a vascular interventional operation navigation system, navigation method, electronic device and readable storage medium. The navigation system comprises an ultrasonic imaging device, an image acquisition device and a controller. The image acquisition device is configured to acquire intra-operative patient pose images. The ultrasonic imaging device comprises a mechanical arm and an ultrasonic probe, the ultrasonic probe being configured to acquire intra-operative blood vessel images. The controller is configured to: match the intra-operative patient pose image with a previously acquired preoperative blood vessel image to obtain intra-operative position information of the target blood vessel; control the mechanical arm to drive the ultrasonic probe to an initial ultrasound observation position, and guide the surgical instrument to puncture into the target blood vessel according to the intra-operative blood vessel image acquired by the ultrasonic probe at that position; and control the mechanical arm to move the ultrasonic probe according to the intra-operative blood vessel images it acquires. The invention thereby realizes navigation of vascular interventional operations under ultrasound guidance.

Description

Vascular intervention navigation system, method, electronic device and readable storage medium
Technical Field
The invention relates to the technical field of medical instruments, and in particular to a vascular interventional operation navigation system, navigation method, electronic device and readable storage medium.
Background
Vascular intervention is a procedure in which a thin, flexible tubular interventional device is inserted into a blood vessel of the human body and advanced to a target position to treat disease. Taking aortic valve replacement as an example: before the operation, the doctor performs image analysis, three-dimensional reconstruction and the like on CTA images of the patient's aortic root and full aorta to comprehensively evaluate and measure the patient's intravascular lesions, and accordingly plans the operative path, the models of the instruments to be used, and so on. During the operation, the doctor works under the guidance of X-ray imaging, acquiring digital subtraction angiography (DSA) images with the aid of injected contrast agent, and judges how to advance and deploy the surgical instruments through visual observation and experience. Ultrasound imaging, which involves no ionizing radiation, is also commonly used intra-operatively to locate the access vessel and guide the physician through the puncture, but it has not been used as a navigation solution.
However, existing navigation approaches have the following problems:
1. Intra-operative navigation generally relies on DSA imaging, exposing both patient and doctor to a large amount of ionizing radiation; to attenuate its effects, the doctor must operate wearing a heavy lead apron.
2. Ultrasound imaging involves no ionizing radiation and can complete puncture navigation for the access vessel, but it cannot meet the visual navigation requirement of guiding a guide wire and catheter travelling within the vessel for vascular interventional therapy.
3. Matching a preoperatively acquired CTA image with an intra-operatively acquired DSA image is difficult: affected by factors such as patient posture, dynamic deformation of tissues and organs, and inconsistent modal dimensions, the multi-modal fusion result carries a certain error, which limits its usefulness to doctors when applied to intra-operative navigation.
4. DSA images are generally used as the intra-operative vascular navigation reference, but two-dimensional DSA images cannot accurately reflect the three-dimensional morphology of the aortic root.
It should be noted that the information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to provide a vascular interventional operation navigation system, navigation method, electronic device and readable storage medium that realize vascular interventional operation navigation under ultrasound guidance and avoid harm to patients and doctors from ionizing radiation.
In order to achieve the above purpose, the invention provides a vascular interventional operation navigation system, which comprises an ultrasonic imaging device, an image acquisition device and a controller, wherein the ultrasonic imaging device and the image acquisition device are both in communication connection with the controller;
The image acquisition device is configured to acquire intra-operative patient pose images and transmit the patient pose images to the controller;
the ultrasonic imaging device comprises a mechanical arm and an ultrasonic probe mounted at the tail end of the mechanical arm, wherein the mechanical arm is configured to drive the ultrasonic probe to move to different ultrasonic observation positions, and the ultrasonic probe is configured to acquire an intraoperative blood vessel image at each ultrasonic observation position and transmit the intraoperative blood vessel image to the controller;
the controller is configured to implement the steps of:
matching the intra-operative patient pose image with the acquired preoperative blood vessel image to acquire intra-operative position information of a target blood vessel;
controlling the mechanical arm to drive the ultrasonic probe to move to an initial ultrasonic observation position corresponding to an initial section of the target blood vessel according to the intraoperative position information of the target blood vessel, and guiding a surgical instrument to penetrate into the target blood vessel according to an intraoperative blood vessel image acquired by the ultrasonic probe at the initial ultrasonic observation position; and
controlling the mechanical arm to drive the ultrasonic probe to move according to the intra-operative blood vessel images acquired by the ultrasonic probe, so that each ultrasound observation position of the ultrasonic probe remains directly above the target blood vessel, the surgical instrument is tracked in real time, and the surgical instrument is guided to travel along the target blood vessel until it reaches a target area.
Optionally, the ultrasonic imaging device further comprises a force sensor installed on the ultrasonic probe; the force sensor is configured to collect information on the contact force between the ultrasonic probe and the patient's body surface and transmit it to the controller, and the controller is further configured to control the mechanical arm, according to the contact force, to keep the ultrasonic probe attached to the patient's body surface.
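The contact-force behaviour described above can be sketched as a simple closed-loop controller. The following Python sketch is illustrative only: the proportional control law, gain, 5 N target force and linear tissue-stiffness model are all assumptions, not details given in the text.

```python
def force_step(z, measured_force, target_force=5.0, kp=0.0002):
    """One control step: move the probe along its vertical axis so the
    measured contact force approaches the target force (units: m, N)."""
    error = target_force - measured_force
    # Too little force -> press further down (decrease z), and vice versa.
    return z - kp * error

def simulate_contact(z_surface=0.100, stiffness=2000.0, steps=60):
    """Drive the probe against a linear 'tissue spring' F = k * indentation
    and return the steady-state contact force."""
    z = z_surface + 0.005  # start 5 mm above the body surface
    for _ in range(steps):
        indentation = max(0.0, z_surface - z)
        z = force_step(z, stiffness * indentation)
    return stiffness * max(0.0, z_surface - z)
```

With these numbers the loop gain (kp times stiffness = 0.4) is below 1, so the contact force settles monotonically at the target.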
Optionally, the ultrasound imaging device further comprises a couplant delivery assembly mounted to a distal end of the robotic arm, the couplant delivery assembly configured to apply couplant at each ultrasound observation position.
Optionally, the controller is configured to match the intra-operative patient pose image with the acquired pre-operative blood vessel image to acquire intra-operative location information of the access blood vessel by:
segmenting the intra-operative patient pose image to obtain an intra-operative patient body surface image;
registering the body surface image of the patient in operation and the preoperative blood vessel image to obtain a first position transformation matrix;
and mapping the preoperative blood vessel center line acquired based on the preoperative blood vessel image to the body surface image of the intraoperative patient according to the first position transformation matrix so as to acquire intraoperative position information of the access blood vessel.
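Mapping the preoperative centerline through the first position transformation matrix is a standard homogeneous-coordinates operation. A minimal numpy sketch, where the Nx3 point layout and 4x4 matrix convention are assumptions made for illustration:

```python
import numpy as np

def map_points(points, T):
    """Map Nx3 centerline points through a 4x4 homogeneous transform T."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Example: a pure translation of (10, 0, -5) mm.
T = np.eye(4)
T[:3, 3] = [10.0, 0.0, -5.0]
centerline = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
mapped = map_points(centerline, T)
```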
Optionally, the vascular interventional procedure navigation system further comprises a laser projection device communicatively connected to the controller, and the controller is further configured to project the preoperative vascular centerline to the patient body surface according to a mapping result of the preoperative vascular centerline in the intra-operative patient body surface image.
Optionally, the intra-operative patient pose image is a two-dimensional image, and the preoperative blood vessel image is a three-dimensional image;
the controller is configured to register the intra-operative patient body surface image and the pre-operative blood vessel image to obtain a first positional transformation matrix by:
reconstructing the preoperative blood vessel image according to preset digital image reconstruction parameters to obtain a corresponding two-dimensional preoperative simulated blood vessel image;
registering the body surface image of the patient in the operation and the two-dimensional preoperation simulated blood vessel image to obtain a first position transformation matrix;
the controller is configured to map the preoperative vessel centerline to the intra-operative patient body surface image by:
projecting the preoperative blood vessel center line into a two-dimensional preoperative simulated blood vessel center line according to the digital image reconstruction parameters;
and mapping the two-dimensional preoperative simulated vessel centerline to the intra-operative patient body surface image according to the first position transformation matrix.
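One conventional way to obtain such a position transformation matrix from corresponding feature points in the intra-operative body surface image and the two-dimensional preoperative simulated blood vessel image is the Procrustes/Kabsch least-squares fit. The sketch below illustrates that generic method for rigid 2D motion; the patent does not specify which registration algorithm is used.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Estimate a 3x3 homogeneous matrix (rotation + translation) that
    maps Nx2 source points onto Nx2 destination points, least-squares."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T
```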
Optionally, the controller is further configured to implement the steps of:
segmenting the preoperative blood vessel image to obtain preoperative blood vessel segmentation results;
acquiring a preoperative blood vessel centerline according to the preoperative blood vessel segmentation result;
straightening the preoperative blood vessel segmentation result along the preoperative blood vessel centerline to obtain a preoperative straightened blood vessel map; and
calculating relevant parameters of the target blood vessel according to the preoperative straightened blood vessel map.
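As a rough illustration of centerline extraction and straightening, the sketch below takes the per-slice centroid of a binary vessel mask as the centerline and computes the cumulative arc length along it, which serves as the axis of the straightened vessel map. This is an illustrative simplification, not the patented method; real pipelines typically use skeletonization or minimal-path extraction.

```python
import numpy as np

def centerline_from_mask(mask):
    """Per-slice centroids of a binary vessel mask shaped (Z, Y, X)."""
    pts = []
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if xs.size:
            pts.append((float(z), ys.mean(), xs.mean()))
    return np.array(pts)

def arc_length(centerline):
    """Cumulative length along the centerline; resampling perpendicular to
    the centerline at each arc-length step yields the straightened map."""
    steps = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    return np.concatenate([[0.0], np.cumsum(steps)])

# Synthetic straight 'vessel': a 3x3 square of voxels in every slice.
mask = np.zeros((10, 16, 16), dtype=bool)
mask[:, 6:9, 6:9] = True
cl = centerline_from_mask(mask)
lengths = arc_length(cl)
```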
Optionally, the controller is configured to control the mechanical arm to drive the ultrasonic probe to move by:
step A, acquiring an intraoperative blood vessel image sequence acquired by the ultrasonic probe at a current ultrasonic observation position;
step B, acquiring two-dimensional position deviation information between a blood vessel center point at the current ultrasonic observation position and the ultrasonic probe center point and advancing information of the surgical instrument according to the intraoperative blood vessel image sequence;
step C, acquiring target position information of the mechanical arm according to the position information of the mechanical arm at the current ultrasonic observation position, the two-dimensional position deviation between the blood vessel center point and the ultrasonic probe center point and the advancing information of the surgical instrument;
step D, controlling the mechanical arm to drive the ultrasonic probe to move to the next ultrasound observation position according to the target position information of the mechanical arm, and returning to execute step A.
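Steps B and C amount to converting the image-space offset between the vessel center and the probe center into a lateral correction of the arm, plus a feed that follows the instrument along the vessel. The toy sketch below assumes a pixel scale, planar motion, and a vessel running along the arm's y axis; all of these are illustrative choices, not details from the text.

```python
def next_arm_target(arm_xyz, vessel_center_px, probe_center_px,
                    instrument_advance_mm, mm_per_px=0.2):
    """Return the arm's next target position (step C) from its current
    position, the image-space deviation (step B) and the instrument feed."""
    # Lateral correction: re-center the probe over the vessel.
    lateral_mm = (vessel_center_px[0] - probe_center_px[0]) * mm_per_px
    x, y, z = arm_xyz
    # Assume the vessel runs along the arm's y axis; follow the instrument.
    return (x + lateral_mm, y + instrument_advance_mm, z)
```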
Optionally, the controller is further configured to implement the steps of:
acquiring intraoperative blood vessel center line point cloud data according to the position information of blood vessel center points at different ultrasonic observation positions, the position information of the surgical instrument and the position information of the mechanical arm;
registering the intraoperative vessel centerline point cloud data with preoperative vessel centerline point cloud data acquired based on the preoperative vessel image to acquire a second position transformation matrix; and
and displaying the surgical instrument, fused onto the preoperative blood vessel segmentation result acquired based on the preoperative blood vessel image, according to the second position transformation matrix and the current position information of the surgical instrument.
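Obtaining the second position transformation matrix from the two centerline point clouds is a classic rigid point-cloud registration problem, commonly solved with ICP. The minimal point-to-point ICP below (brute-force nearest neighbours) is a generic sketch; the patent does not name the algorithm, and a practical system would add a KD-tree and outlier rejection.

```python
import numpy as np

def best_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Nx3 each)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    """Iteratively align src to dst: match nearest neighbours, solve, repeat."""
    aligned = src.copy()
    for _ in range(iterations):
        dists = np.linalg.norm(aligned[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_fit(aligned, dst[dists.argmin(axis=1)])
        aligned = aligned @ R.T + t
    return aligned
```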
Optionally, the ultrasound probe is further configured to acquire an intra-operative target area image and transmit it to the controller;
the controller is further configured to implement the steps of:
segmenting the intra-operative target area image to obtain an intra-operative target area segmentation result;
segmenting the acquired preoperative target area image to obtain a preoperative target area segmentation result;
registering the intra-operation target region segmentation result and the pre-operation target region segmentation result to obtain a third position transformation matrix;
and mapping the preoperative target region segmentation result to the intra-operative target region image according to the third position transformation matrix so as to perform fusion display.
Optionally, the controller is further configured to implement the steps of:
acquiring a center line of the preoperative target area according to the segmentation result of the preoperative target area;
straightening the preoperative target region segmentation result along the preoperative target region centerline to obtain a preoperative straightened target region map; and
calculating relevant parameters of the target area according to the preoperative straightened target region map.
In order to achieve the above object, the present invention further provides a navigation method for vascular interventional operation, comprising:
acquiring a pose image and a preoperative blood vessel image of a patient in operation;
matching the intra-operative patient pose image with the preoperative blood vessel image to obtain intra-operative position information of a target blood vessel;
controlling a mechanical arm to drive an ultrasonic probe to move to an initial ultrasonic observation position corresponding to an initial section of the target blood vessel according to the intraoperative position information of the target blood vessel, and guiding a surgical instrument to penetrate into the target blood vessel according to an intraoperative blood vessel image acquired by the ultrasonic probe at the initial ultrasonic observation position; and
controlling the mechanical arm to drive the ultrasonic probe to move according to the intra-operative blood vessel images acquired by the ultrasonic probe, so that each ultrasound observation position of the ultrasonic probe remains directly above the target blood vessel, the surgical instrument is tracked in real time, and the surgical instrument is guided to travel along the target blood vessel until it reaches a target area.
In order to achieve the above object, the present invention further provides an electronic device, including a processor and a memory, where the memory stores a computer program, and the computer program implements the vascular interventional operation navigation method described above when executed by the processor.
To achieve the above object, the present invention further provides a readable storage medium having stored therein a computer program which, when executed by a processor, implements the vascular interventional procedure navigation method described above.
Compared with the prior art, the vascular intervention operation navigation system, the vascular intervention operation navigation method, the electronic equipment and the readable storage medium provided by the invention have the following advantages:
the vascular intervention operation navigation system provided by the invention can initially locate the intra-operative position of a target blood vessel (such as an aortic vessel) by matching the intra-operative patient pose image with the preoperatively acquired blood vessel image. The intra-operative blood vessel images acquired by the ultrasonic probe replace the DSA images of the prior art to guide vascular puncture, and the blood vessel and the surgical instruments are tracked automatically to realize automatic navigation, thereby effectively avoiding harm to patients and doctors from ionizing radiation. In addition, because the mechanical arm keeps the ultrasonic probe directly above the target blood vessel at all times, the quality of the intra-operative blood vessel images is effectively guaranteed, so that the vascular interventional operation navigation system provided by the invention can realize an accurate intra-operative navigation function.
Since the vascular intervention navigation method, electronic device and readable storage medium provided by the invention share the same conception as the vascular intervention navigation system provided by the invention, they inherit all of its advantages, and their beneficial effects are therefore not repeated here.
Drawings
Fig. 1 is a block diagram of a navigation system for vascular interventional operation according to an embodiment of the present invention;
FIG. 2 is a schematic view of a projection of a central line of a blood vessel and a pose image acquisition of a patient according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an application scenario of a vascular interventional operation navigation system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the overall workflow of a controller in a vascular interventional procedure navigation system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a matching process between an intra-operative patient pose image and a preoperative blood vessel image according to an embodiment of the present invention;
FIG. 6a is an image of a body surface of an intraoperative patient provided in accordance with one embodiment of the present invention;
FIG. 6b is a pre-operative blood vessel image provided in accordance with one embodiment of the present invention;
FIG. 6c is a two-dimensional pre-operative simulated blood vessel image corresponding to the pre-operative blood vessel image shown in FIG. 6 b;
FIG. 6d is a two-dimensional pre-operative simulated vessel centerline provided in accordance with a specific example of the present invention;
FIG. 6e is a schematic view showing a fusion display of an intraoperative patient body surface image and a two-dimensional preoperative simulated vessel centerline according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of digital image reconstruction;
FIG. 8 is a schematic diagram of a preoperative blood vessel image analysis flow according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of obtaining a preoperative vessel segmentation result according to an embodiment of the present invention;
FIG. 10 is a schematic flow chart of extracting a blood vessel centerline according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of centerline extraction according to an embodiment of the present invention;
FIG. 12 is a straightened image of a preoperative blood vessel image generated based on a blood vessel centerline, in accordance with a specific example of the present invention;
FIG. 13 is a schematic diagram of a specific flow for controlling the travel of a robot arm according to an embodiment of the present invention;
FIG. 14a is an image of an intraoperative blood vessel provided in accordance with one embodiment of the present invention;
FIG. 14b is a schematic view of a segmentation result of a blood vessel and a surgical instrument according to an embodiment of the present invention;
FIG. 15 is a schematic view of a flow chart of a preoperative and intraoperative vascular fusion display according to an embodiment of the present invention;
FIG. 16a is a sequence of intraoperative blood vessel images provided in accordance with one embodiment of the present invention;
FIG. 16b is a schematic view of segmentation results of the intraoperative vascular image sequence shown in FIG. 16 a;
FIG. 16c is a schematic illustration of an extracted vessel centerline;
FIG. 17 is a schematic view showing a fusion display of an intraoperative surgical instrument on a preoperative vessel segmentation result according to an embodiment of the present invention;
FIG. 18 is a schematic diagram of a flow chart of a fusion display of preoperative and intraoperative target areas according to an embodiment of the present invention;
FIG. 19a is an image of an intraoperative target area provided in accordance with one embodiment of the present invention;
FIG. 19b is a segmentation result of the intra-operative target region image shown in FIG. 19 a;
FIG. 19c is a graph showing the result of segmentation of a pre-operative target region according to an embodiment of the present invention;
FIG. 19d is a schematic view showing the fusion results of the target regions before and during the operation according to an embodiment of the present invention;
fig. 20 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Wherein, the reference numerals are as follows:
an ultrasonic imaging device-100; a mechanical arm-110; an ultrasonic probe-120; column-130; a force sensor-140; a couplant delivery assembly-150; an image acquisition device-200; a controller-300; a display-400; a laser projection device-500;
A processor-610; a communication interface-620; a memory-630; communication bus-640.
Detailed Description
The vascular interventional procedure navigation system, navigation method, electronic device and readable storage medium of the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments, from which the advantages and features of the invention will become more apparent. It should be noted that the drawings are in greatly simplified form and at imprecise scale, serving only to conveniently and clearly aid in explaining the embodiments of the invention. The structures, proportions and sizes shown in the drawings are intended only to assist in understanding and reading this disclosure, not to limit the scope of the invention, which is defined by the appended claims; any structural modification, change of proportion or adjustment of size that achieves the same or similar effects and objectives as the invention shall still fall within the scope of the disclosed technical content.
It is noted that relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
Furthermore, in the description herein, reference to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the various embodiments or examples described in this specification, and the features thereof, may be combined by those skilled in the art without contradiction.
The invention provides a vascular interventional navigation system, a vascular interventional navigation method, an electronic device, and a readable storage medium, which can realize vascular interventional navigation under ultrasound guidance and avoid the damage that ionizing radiation causes to patients and doctors. It should be noted that, as those skilled in the art can appreciate, the electronic device provided by the present invention may be configured on the surgical navigation system provided by the present invention. The electronic device may be a personal computer, a mobile terminal, etc., where the mobile terminal may be a hardware device running any of various operating systems, such as a mobile phone or a tablet computer. The surgical navigation system provided by the present invention may be applied to vascular interventional procedures such as heart valve replacement (for example, aortic valve replacement), stenting of a limb artery or the abdominal aorta, balloon dilation, filter placement, or mechanical thrombectomy. In addition, as will be appreciated by those skilled in the art, the term "proximal" as used herein refers to the end close to the operator, and the term "distal" refers to the end away from the operator, i.e., the end close to the patient.
In order to achieve the above idea, the present invention provides a vascular interventional navigation system; please refer to fig. 1 to 3, wherein fig. 1 schematically shows a block diagram of a vascular interventional navigation system according to an embodiment of the present invention; fig. 2 schematically illustrates patient pose image acquisition and vessel centerline projection provided by an embodiment of the present invention; and fig. 3 schematically shows an application scenario of a vascular interventional navigation system (image acquisition device not shown) according to an embodiment of the present invention. As shown in figs. 1 to 3, the vascular interventional navigation system provided by the invention comprises an ultrasonic imaging device 100, an image acquisition device 200, and a controller 300, wherein the ultrasonic imaging device 100 and the image acquisition device 200 are communicatively connected with the controller 300. The image acquisition device 200 is configured to acquire intra-operative patient pose images and transmit them to the controller 300. The ultrasound imaging apparatus 100 includes a robotic arm 110 and an ultrasound probe 120 mounted to a distal end of the robotic arm 110; the robotic arm 110 is configured to drive the ultrasound probe 120 to move to different ultrasound observation positions, and the ultrasound probe 120 is configured to acquire intra-operative blood vessel images at the ultrasound observation positions and transmit them to the controller 300. Further, as shown in fig. 3, the ultrasonic imaging apparatus 100 further includes a column 130, and the proximal end of the robotic arm 110 is mounted on the column 130.
With continued reference to fig. 4, a schematic overall workflow diagram of the controller 300 in the vascular interventional procedure navigation system according to an embodiment of the present invention is schematically shown. As shown in fig. 4, the controller 300 is configured to implement the following steps:
step S100, matching the intra-operative patient pose image with the acquired preoperative blood vessel image to acquire intra-operative position information of a target blood vessel;
step S200, controlling the mechanical arm 110 to drive the ultrasonic probe 120 to move to an initial ultrasonic observation position corresponding to an initial section of the target blood vessel according to the intra-operative position information of the target blood vessel, and guiding a surgical instrument to penetrate into the target blood vessel according to the intra-operative blood vessel image acquired by the ultrasonic probe 120 at the initial ultrasonic observation position; and
step S300, controlling the mechanical arm 110 to drive the ultrasonic probe 120 to move according to the intra-operative blood vessel image acquired by the ultrasonic probe 120, so that each ultrasonic observation position of the ultrasonic probe 120 is always located directly above the target blood vessel and the surgical instrument is tracked in real time, so as to guide the surgical instrument to travel along the target blood vessel until reaching a target area.
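As a rough illustration only (none of the function or variable names below come from the patent, and placeholder logic stands in for the real image processing and robot control), the three-step workflow of steps S100 to S300 can be sketched as a simple control loop:

```python
# Illustrative sketch of the S100-S300 workflow; all names are assumptions.

def match_pose_to_preop(pose_image, preop_image):
    """Step S100: return intra-operative position info for the target vessel.

    A real system would segment and register the images; here we return a
    fixed placeholder position for the vessel's initial segment.
    """
    return {"initial_segment_xy": (120.0, 85.0)}

def navigate(pose_image, preop_image, vessel_path):
    position = match_pose_to_preop(pose_image, preop_image)
    probe_xy = position["initial_segment_xy"]    # step S200: move to start
    trajectory = [probe_xy]
    for waypoint in vessel_path:                 # step S300: follow the vessel,
        probe_xy = waypoint                      # keeping probe above it
        trajectory.append(probe_xy)
    return trajectory

path = [(125.0, 90.0), (130.0, 95.0)]
print(navigate(None, None, path))
```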
Therefore, the vascular interventional navigation system provided by the invention can initially locate the intra-operative position of a target blood vessel (such as an aortic blood vessel) by matching the intra-operative patient pose image, acquired during the operation, with the preoperative blood vessel image acquired before the operation. By using intra-operative blood vessel images acquired by the ultrasonic probe 120, instead of the DSA images of the prior art, to guide vessel puncture, and by automatically tracking the blood vessel and the surgical instrument to realize automatic navigation, the damage of ionizing radiation to patients and doctors can be effectively avoided. In addition, since the mechanical arm 110 drives the ultrasonic probe 120 so that it is always positioned directly above the target blood vessel, the quality of the intra-operative blood vessel images acquired by the ultrasonic probe 120 can be effectively ensured, enabling the vascular interventional navigation system provided by the invention to realize accurate intra-operative navigation.
Specifically, the preoperative blood vessel image is a three-dimensional image composed of a plurality of two-dimensional images, including but not limited to a CTA image of the whole aorta (including the iliac arteries, the descending aorta, the aortic arch, the ascending aorta, etc.). The target vessel may be determined based on the preoperative blood vessel image. The ultrasonic probe 120 may be a two-dimensional ultrasonic probe or a three-dimensional ultrasonic probe (e.g., an ultrasonic volume probe), the three-dimensional ultrasonic probe being preferred. The image acquisition device 200 includes, but is not limited to, a visible light camera. Specifically, the image acquisition device 200 (for example, a visible light camera) is disposed directly above the patient, and after the doctor completes the positioning operation, the controller 300 controls the image acquisition device 200 to acquire the intra-operative patient pose image. Optionally, in order to improve the matching between the intra-operative patient pose image and the preoperative blood vessel image, a plurality of markers are arranged on the patient's body surface before the preoperative blood vessel image is acquired, and the same markers are placed at the same body surface locations before the intra-operative patient pose image is acquired.
Still further, as shown in fig. 1 and 3, the navigation system provided by the present invention further includes a display 400 communicatively connected to the controller 300, where the display 400 is configured to display the preoperative blood vessel image, the intra-operative patient pose image, a matching result of the preoperative blood vessel image and the intra-operative patient pose image, and the intra-operative blood vessel image.
With continued reference to fig. 3, as shown in fig. 3, in an exemplary embodiment, the ultrasound imaging apparatus 100 further includes a force sensor 140 mounted on the ultrasound probe 120, the force sensor 140 is configured to collect information of a contact force between the ultrasound probe 120 and a patient body surface, and transmit the information of the contact force to the controller 300, and the controller 300 is further configured to control the mechanical arm 110 to drive the ultrasound probe 120 to attach to the patient body surface according to the contact force. Therefore, by controlling the ultrasonic probe 120 to acquire the blood vessel image in operation under the condition of being attached to the body surface of the patient, the quality of the blood vessel image in operation acquired by the ultrasonic probe 120 can be further ensured, and the accuracy of the blood vessel interventional operation navigation system provided by the invention is further improved.
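As an illustrative aside, the contact-force regulation described above can be sketched as a simple proportional controller that nudges the probe along its axis until the measured force reaches a target contact force; the gain, target force, and function names are assumptions for the example, not values disclosed in the patent:

```python
# Hypothetical proportional force control for probe-to-skin contact.

def adjust_probe_depth(measured_force, target_force, depth, gain=0.5):
    """Return an updated probe depth (mm); a positive error presses deeper."""
    error = target_force - measured_force   # force error in N
    return depth + gain * error             # illustrative gain, mm per N

depth = 0.0
for f in (0.0, 2.0, 3.5):   # simulated force-sensor readings, target 4 N
    depth = adjust_probe_depth(f, 4.0, depth)
print(round(depth, 2))
```

In a real system the controller 300 would command the mechanical arm 110 with such an update at each control cycle; the sketch only shows the feedback idea.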
With continued reference to fig. 3, in one exemplary embodiment, as shown in fig. 3, the ultrasound imaging apparatus 100 further includes a couplant delivery assembly 150 mounted to a distal end of the robotic arm 110, the couplant delivery assembly 150 being configured to apply couplant at each ultrasound observation site. Therefore, the couplant conveying assembly 150 is arranged to automatically coat the couplant at each ultrasonic observation position, so that the complicated requirement that the couplant needs to be manually coated in ultrasonic imaging can be avoided, and the standardization of an ultrasonic scanning process can be realized. For the specific structure and working principle of the couplant delivery assembly 150, reference may be made to a delivery tube type feeding device in the prior art, and details thereof will not be described herein.
In an exemplary embodiment, the controller 300 is configured to match the intra-operative patient pose image with the acquired pre-operative blood vessel image to acquire intra-operative location information of the access blood vessel by:
dividing the pose image of the patient in operation to obtain a body surface image of the patient in operation;
registering the body surface image of the patient in operation and the preoperative blood vessel image to obtain a first position transformation matrix;
And mapping the preoperative blood vessel center line acquired based on the preoperative blood vessel image to the body surface image of the intraoperative patient according to the first position transformation matrix so as to acquire intraoperative position information of the access blood vessel.
Specifically, the intra-operative patient pose image may be segmented using a common image segmentation method such as threshold segmentation, region growing, or a deep learning algorithm, so as to segment the patient body surface region (if markers are present on the patient's body surface, the markers are segmented at the same time), thereby obtaining the intra-operative patient body surface image. It should be noted that, as understood by those skilled in the art, the access vessel is determined by the doctor according to the preoperative blood vessel image, and the preoperative vessel centerline (for example, the vessel centerline of the whole aorta) also includes the centerline of the access vessel. After the first position transformation matrix is obtained, the coordinates of each center point on the preoperative vessel centerline are transformed according to the first position transformation matrix, so that the preoperative vessel centerline obtained from the preoperative blood vessel image is mapped onto the intra-operative patient body surface image, thereby obtaining the intra-operative position information of the access vessel (that is, the position information on the patient's body surface corresponding to the access vessel); at the same time, the transformed preoperative vessel centerline can be fused and displayed on the intra-operative patient body surface image.
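The coordinate transformation of the centerline center points can be illustrated with a homogeneous transformation matrix; the matrix values and point coordinates below are made up for the example:

```python
# Mapping pre-operative centerline points with a 2D homogeneous transform.
import numpy as np

T = np.array([[1.0, 0.0, 10.0],    # first position transformation matrix
              [0.0, 1.0, -5.0],    # (here a pure translation, for clarity)
              [0.0, 0.0,  1.0]])

centerline = np.array([[50.0, 80.0],
                       [52.0, 90.0]])                 # (x, y) center points
homog = np.hstack([centerline, np.ones((2, 1))])      # homogeneous coordinates
mapped = (T @ homog.T).T[:, :2]                       # apply T, drop w
print(mapped.tolist())
```

In practice the first position transformation matrix would come from the registration step and generally includes rotation and scaling as well.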
Further, the intra-operative patient pose image is a two-dimensional image, and the preoperative blood vessel image is a three-dimensional image;
the controller 300 is configured to register the intra-operative patient body surface image and the pre-operative blood vessel image to obtain a first position transformation matrix by:
reconstructing the preoperative blood vessel image according to preset digital image reconstruction parameters to obtain a corresponding two-dimensional preoperative simulated blood vessel image;
registering the intra-operative patient body surface image and the two-dimensional pre-operative simulated vessel image to obtain a first position transformation matrix.
Correspondingly, the controller 300 is configured to map the preoperative vessel centerline to the intra-operative patient body surface image by:
projecting the preoperative blood vessel center line into a two-dimensional preoperative simulated blood vessel center line according to the digital image reconstruction parameters;
and mapping the two-dimensional preoperative simulated vessel center line to the body surface image of the patient in operation according to the first position transformation matrix.
Specifically, please refer to fig. 5, which schematically illustrates the matching flow between the intra-operative patient pose image and the preoperative blood vessel image according to an embodiment of the present invention. As shown in fig. 5, after the intra-operative patient pose image is acquired, it may be segmented by a pre-trained first segmentation model to segment the patient body surface region (if markers are present on the patient's body surface, the markers are segmented at the same time), thereby obtaining the intra-operative patient body surface image; refer to fig. 6a, which schematically shows an intra-operative patient body surface image provided by a specific example of the present invention. By performing digitally reconstructed radiograph (DRR) reconstruction on the preoperative blood vessel image, the preoperative blood vessel image may be projected into a two-dimensional DRR image in the anteroposterior view to obtain a two-dimensional preoperative simulated blood vessel image; refer to fig. 6b and fig. 6c, where fig. 6b schematically shows the preoperative blood vessel image provided by a specific example of the present invention, and fig. 6c schematically shows the two-dimensional preoperative simulated blood vessel image corresponding to the preoperative blood vessel image shown in fig. 6b. By performing the same digital image reconstruction operation on the three-dimensional preoperative blood vessel centerline, a two-dimensional preoperative simulated blood vessel centerline can be obtained; refer to fig. 6d, which schematically illustrates the two-dimensional preoperative simulated blood vessel centerline provided by a specific example of the present invention.
Feature matching is performed by extracting features from the intra-operative patient body surface image and the two-dimensional preoperative simulated blood vessel image; feature matching methods include, but are not limited to, SIFT (scale-invariant feature transform), Harris, and other feature matching algorithms. A first position transformation matrix can be obtained from the result of feature matching, and with this matrix the two-dimensional preoperative simulated blood vessel centerline obtained from the preoperative blood vessel image can be fused and displayed on the intra-operative patient body surface image. Please refer to fig. 6e, which schematically shows the fusion display of the intra-operative patient body surface image and the two-dimensional preoperative simulated blood vessel centerline; as shown in fig. 6e, this fusion display reveals the positions of the target blood vessel (e.g., the aorta) and the access blood vessel within the intra-operative patient body surface image.
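As a toy illustration of the feature-matching step (standing in for SIFT or Harris descriptors, which are far higher-dimensional in practice), descriptors can be matched by nearest-neighbour distance and a translation estimated from the matched keypoints; all values are fabricated for the example:

```python
# Nearest-neighbour descriptor matching and translation estimation (toy data).
import numpy as np

desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])       # descriptors, image A
desc_b = np.array([[0.1, 0.9], [0.9, 0.1]])       # descriptors, image B
pts_a  = np.array([[10.0, 10.0], [40.0, 25.0]])   # keypoint coords, image A
pts_b  = np.array([[45.0, 22.0], [15.0, 7.0]])    # keypoint coords, image B

# match each A-descriptor to its nearest B-descriptor (Euclidean distance)
d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
matches = d.argmin(axis=1)                         # A index -> B index
shift = (pts_b[matches] - pts_a).mean(axis=0)      # average translation A -> B
print(matches.tolist(), shift.tolist())
```

A full registration would estimate a rigid or affine first position transformation matrix (e.g., by least squares or RANSAC) rather than a pure translation.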
It should be noted that, as those skilled in the art can understand, a registration algorithm other than feature matching may also be used to register the intra-operative patient body surface image and the two-dimensional preoperative simulated blood vessel image; for example, an elastic registration deformation field may be obtained via deep learning and used to register the two images. It should be further noted that the present invention does not limit the specific network structure of the first segmentation model, which includes, but is not limited to, a Unet neural network model; for example, when the intra-operative patient pose image is a two-dimensional image, the first segmentation model for segmenting it is a 2D-Unet neural network model. The training process of the first segmentation model is as follows: the doctor annotates the body surface region in the acquired patient pose training images as the gold standard; each patient pose training image is then input into the pre-built first segmentation model to obtain a corresponding segmentation result; a loss function value between the segmentation result and the corresponding gold standard is calculated, and the network parameters of the first segmentation model are adjusted according to the loss function value. When the loss function value is smaller than or equal to a preset threshold, or converges, the training of the first segmentation model has converged and can be ended. Specifically, the loss function value can be calculated using the Dice coefficient loss function, and the network parameters can be adjusted using a stochastic gradient descent optimizer.
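The Dice coefficient loss mentioned above can be sketched as follows; the smoothing constant `eps` is a common convention rather than something specified in the text, and NumPy stands in for an actual deep learning framework:

```python
# Dice coefficient loss on per-pixel masks (values in [0, 1]).
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Return 1 - Dice coefficient between predicted and gold-standard masks."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred   = np.array([[1.0, 1.0], [0.0, 0.0]])   # predicted body-surface mask
target = np.array([[1.0, 0.0], [0.0, 0.0]])   # gold-standard annotation
print(round(dice_loss(pred, target), 3))      # 1 - 2*1/(2+1)
```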
With continued reference to fig. 7, a schematic diagram of digital image reconstruction is presented. As shown in fig. 7, by setting a virtual X-ray source and a flat panel detector and performing image reconstruction based on preset digital image reconstruction parameters (including the distance from the virtual X-ray source to the object, i.e., the three-dimensional preoperative blood vessel image or the three-dimensional preoperative blood vessel centerline, and the distance from the virtual X-ray source to the flat panel detector), a two-dimensional preoperative simulated blood vessel image and a two-dimensional preoperative simulated blood vessel centerline can be obtained. It should be noted that, as those skilled in the art will understand, further details regarding digitally reconstructed radiographs can be found in the prior art and are not repeated here.
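The geometry underlying DRR generation can be illustrated by the perspective projection of a single point onto the virtual detector by similar triangles; the parameter names and distances are assumptions for the example:

```python
# Perspective projection of one point onto the virtual flat-panel detector.

def project_point(x, y, depth_from_source, source_to_detector):
    """Project a point at lateral offset (x, y) and given depth onto the
    detector plane; the ratio of distances gives the magnification."""
    scale = source_to_detector / depth_from_source
    return x * scale, y * scale

u, v = project_point(20.0, 10.0,
                     depth_from_source=500.0,     # source-to-point distance
                     source_to_detector=1000.0)   # source-to-detector distance
print(u, v)
```

A real DRR integrates attenuation along each source-to-detector ray through the CT volume; the sketch shows only the projective geometry of a single point.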
With continued reference to fig. 1 and 2, in an exemplary embodiment, as shown in fig. 1 and 2, the vascular interventional procedure navigation system provided by the present invention further includes a laser projection device 500 communicatively connected to the controller 300, and the controller 300 is further configured to project the preoperative vascular centerline (two-dimensional preoperative simulated vascular centerline) onto the body surface of the patient according to the mapping result of the preoperative vascular centerline (two-dimensional preoperative simulated vascular centerline) in the body surface image of the patient. Thus, by projecting the preoperative vessel centerline (two-dimensional preoperative simulated vessel centerline) onto the patient's body surface, it is more convenient for the physician to intraoperatively locate the target vessel (e.g., the aorta) and the access vessel.
In an exemplary embodiment, the controller 300 is further configured to implement the steps of:
segmenting the preoperative blood vessel image to obtain preoperative blood vessel segmentation results;
acquiring a preoperative blood vessel center line according to the preoperative blood vessel segmentation result; and
straightening the preoperative blood vessel segmentation result according to the preoperative blood vessel center line to obtain a preoperative straightened blood vessel map;
and calculating relevant parameters of the target blood vessel according to the preoperative straightened blood vessel graph.
Specifically, the preoperative blood vessel image may be segmented using a common image segmentation method such as threshold segmentation, region growing, or a deep learning algorithm, so as to segment the blood vessel region in the preoperative blood vessel image (for example, an aortic blood vessel region including the target vessels required for the intra-operative approach, such as the iliac artery, the descending aorta, the aortic arch, and the ascending aorta), thereby obtaining the preoperative blood vessel segmentation result. The preoperative blood vessel segmentation result is straightened according to the preoperative vessel centerline to obtain a preoperative straightened blood vessel map, and relevant parameters of the target vessel (including but not limited to vessel diameter, tortuosity, etc.) are calculated based on the preoperative straightened blood vessel map; this enables accurate measurement of vessel parameters and provides a reference basis for the insertion, advancement, and delivery of surgical instruments such as guidewires, catheters, and prosthetic valves. It should be noted that, as those skilled in the art can understand, methods for extracting the preoperative vessel centerline include, but are not limited to, tracking methods, ray casting methods, morphological methods, and the like. It should also be noted that, as will be appreciated by those skilled in the art, in other embodiments the preoperative vessel centerline may be extracted directly from the preoperative blood vessel image.
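One of the "relevant parameters" above, tortuosity, is commonly defined as centerline path length divided by the chord between its endpoints; this definition is a common convention, not quoted from the patent:

```python
# Tortuosity of a vessel centerline: path length / endpoint chord length.
import math

def tortuosity(centerline):
    path = sum(math.dist(a, b) for a, b in zip(centerline, centerline[1:]))
    chord = math.dist(centerline[0], centerline[-1])
    return path / chord

# toy 3D centerline: two 5-unit segments spanning a 6-unit chord
line = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (6.0, 0.0, 0.0)]
print(tortuosity(line))   # 10 / 6
```

A perfectly straight vessel has tortuosity 1; higher values indicate a more winding path, which matters when advancing guidewires and catheters.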
With continued reference to fig. 8, a schematic diagram of a preoperative blood vessel image analysis flow provided in an embodiment of the invention is schematically shown. As shown in fig. 8, for a preoperative blood vessel image (e.g. aortic CTA image), segmentation may be performed based on a pre-trained second segmentation model to obtain a preoperative blood vessel segmentation result (including the iliac artery, the descending aorta, the aortic arch, the ascending aorta and other target blood vessels required for the intraoperative approach), referring specifically to fig. 9, a schematic diagram of obtaining a preoperative blood vessel segmentation result according to a specific example of the present invention is schematically shown. The preoperative blood vessel center line can be extracted by extracting the center line of the preoperative blood vessel segmentation result, the preoperative blood vessel segmentation result is straightened according to the extracted preoperative blood vessel center line, a corresponding preoperative straightened blood vessel graph can be generated, and the diameter, tortuosity and other blood vessel parameters of the target blood vessel can be obtained by quantitatively analyzing the preoperative straightened blood vessel graph. Furthermore, the pre-operation blood vessel image can be straightened according to the extracted pre-operation blood vessel center line so as to generate a corresponding straightened image, and the obtained straightened image can be marked with relevant parameters of the target blood vessel. It should be noted that, as those skilled in the art can understand, the specific network structure of the second segmentation model is not limited in the present invention, and the second segmentation model includes, but is not limited to, a 3D-Unet neural network model, and the training process of the second segmentation model is similar to that of the first segmentation model, and will not be described herein.
With continued reference to fig. 10, a schematic flow chart of extracting a blood vessel centerline according to an embodiment of the present invention is shown. As shown in fig. 10, after the preoperative segmentation result is obtained, contour extraction is performed on the vessel segment of interest in the preoperative segmentation result to extract the corresponding region boundary; the region boundary is then traversed in a certain order, and for the current region boundary the maximum inscribed sphere is found via the Euclidean distance, the center of that maximum inscribed sphere being the point where the vessel centerline passes. After the region boundary has been fully traversed, the line connecting all the found inscribed sphere centers is the centerline of the vessel segment of interest; refer specifically to fig. 11, which schematically shows the centerline extraction provided by a specific example of the present invention. It should be noted that, as those skilled in the art will understand, besides the maximum inscribed sphere method shown in fig. 10, the preoperative vessel centerline may be extracted by other vessel centerline extraction methods in the prior art, and the present invention is not limited in this respect. After the vessel centerline has been extracted, a transformation matrix mapping the original image (e.g., the preoperative blood vessel image) to the straightened image is calculated based on the coordinates of each center point on the centerline, the normal vector corresponding to each center point, and other parameters; the original image is then mapped to the straightened image by linear interpolation to obtain the corresponding straightened image. Please refer to fig. 12, which schematically shows the straightened image of the preoperative blood vessel image generated based on the blood vessel centerline provided by a specific example of the invention.
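The maximum-inscribed-sphere idea can be illustrated in two dimensions: within one vessel cross-section, the centerline point is the foreground pixel whose minimum Euclidean distance to the background is largest. The brute-force sketch below is for illustration only; a real implementation would use a distance transform over the full 3D segmentation:

```python
# Find the cross-section center as the pixel farthest from the boundary.

def center_of_section(mask):
    """mask: 2D list of 0/1; return the (row, col) of the 1-pixel whose
    minimum Euclidean distance to any 0-pixel is largest."""
    zeros = [(r, c) for r, row in enumerate(mask)
             for c, v in enumerate(row) if v == 0]
    ones = [(r, c) for r, row in enumerate(mask)
            for c, v in enumerate(row) if v == 1]
    def min_dist(p):
        return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   for q in zeros)
    return max(ones, key=min_dist)

section = [[0, 0, 0, 0, 0],     # toy binary cross-section of a vessel lumen
           [0, 1, 1, 1, 0],
           [0, 1, 1, 1, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0]]
print(center_of_section(section))
```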
In an exemplary embodiment, the controller 300 is configured to control the mechanical arm 110 to drive the ultrasonic probe 120 to move by:
step A, acquiring an intraoperative blood vessel image sequence acquired by the ultrasonic probe 120 at the current ultrasonic observation position;
step B, acquiring two-dimensional position deviation information between a blood vessel center point at the current ultrasonic observation position and the ultrasonic probe 120 center point and advancing information of the surgical instrument according to the intraoperative blood vessel image sequence;
step C, acquiring target position information of the mechanical arm 110 according to the position information of the mechanical arm 110 at the current ultrasonic observation position, the two-dimensional position deviation between the blood vessel center point and the ultrasonic probe 120 center point and the travelling information of the surgical instrument;
and D, controlling the mechanical arm 110 to drive the ultrasonic probe 120 to move to the next ultrasonic observation position according to the target position information of the mechanical arm 110, and returning to execute the step A.
Specifically, please refer to fig. 13, which schematically illustrates a specific flow chart for controlling the travel of the mechanical arm 110 according to an embodiment of the present invention. As shown in fig. 13, first, based on the pose-image fusion result (i.e., the fusion of the intra-operative patient body surface image and the preoperative vessel centerline), the mechanical arm 110 is controlled to move to the position corresponding to the initial segment of the target blood vessel, thereby moving the ultrasonic probe 120 to the initial ultrasound observation position; the couplant is automatically applied by the couplant delivery assembly 150, and the ultrasonic probe 120 is guided by the force sensor 140 to press against the patient's skin. Intra-operative blood vessel images are then acquired in real time and segmented to obtain segmentation results for the blood vessel and the surgical instrument, according to which the puncture can be guided. After the surgical instrument has been guided into the target vessel, a new sequence of intra-operative blood vessel images (comprising multiple intra-operative blood vessel images acquired at different sampling instants) is acquired by the ultrasonic probe 120 at the initial ultrasound observation position, from which the position coordinates of the blood vessel center point and of the surgical instrument in the coordinate system of the ultrasonic probe 120 are obtained. The two-dimensional position deviation between the blood vessel center point and the ultrasonic probe 120, O_v = (x_{v,o}, y_{v,o}, 0), can then be calculated from the position coordinates of the blood vessel center point. The travel information of the surgical instrument, O_e = (x_{e,o}, y_{e,o}, z_{e,o}), is calculated from the change in the position of the surgical instrument across the intra-operative blood vessel image sequence newly acquired by the ultrasonic probe 120 at the initial ultrasound observation position; specifically, it may be computed from the instrument position obtained from the first frame of the newly acquired sequence and the instrument position obtained from the last frame. Assume the coordinates of the mechanical arm 110 at the current ultrasound observation position (in the base coordinate system of the mechanical arm 110) are C_r = (x_r, y_r, z_r); the coordinates of the mechanical arm 110 after traveling (i.e., the target position coordinates) are then C_r' = C_r + O_v + O_e. Thus, by controlling the mechanical arm 110 to travel to the calculated target position coordinates, the target blood vessel is kept directly below the ultrasonic probe 120, and the ultrasonic probe 120 can automatically track the surgical instrument. Repeating the above steps allows the mechanical arm 110 to guide the ultrasound probe 120 to automatically track surgical instruments such as guidewires, catheters, and prosthetic valves until the surgical instrument reaches the target area (e.g., the aortic root). It should be noted that, as those skilled in the art will appreciate, at the same ultrasound observation position the position of the blood vessel center point is unchanged, while the position of the surgical instrument changes continuously as the instrument advances.
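The target-position update C_r' = C_r + O_v + O_e can be worked through with illustrative numbers (all coordinates below are made up for the example):

```python
# Worked example of the arm target-position update C_r' = C_r + O_v + O_e.

C_r = (100.0, 50.0, 30.0)   # arm coords at the current observation position
O_v = (2.0, -1.0, 0.0)      # vessel center vs. probe center (2D deviation)
O_e = (0.0, 5.0, 0.0)       # instrument advance between first and last frame

C_r_next = tuple(c + v + e for c, v, e in zip(C_r, O_v, O_e))
print(C_r_next)
```

The O_v term re-centers the probe over the vessel, while the O_e term moves the probe along the vessel to follow the advancing instrument.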
Further, in some embodiments, the physician may determine whether the surgical instrument has reached a target area (e.g., the aortic root); the criterion is generally that the surgical instrument has reached the target area if the target area is visible in the intra-operative blood vessel image acquired by the ultrasonic probe 120, e.g., the instrument has reached the aortic root if the aorta, the aortic valve and the outflow tract are visible in that image. In other embodiments, based on a deep learning approach, a pre-trained classification model may be used to determine whether the target region (e.g., the aortic root) is included in the intra-operative blood vessel image acquired by the ultrasonic probe 120.
In an exemplary embodiment, the intra-operative vessel image may be segmented using a pre-trained third segmentation model to obtain segmentation results for vessels and surgical instruments. Referring to fig. 14a and 14b, fig. 14a is an intra-operative blood vessel image according to an embodiment of the present invention; fig. 14b is a schematic view showing a segmentation result of a blood vessel and a surgical instrument according to an embodiment of the present invention. It should be noted that, as can be understood by those skilled in the art, the specific network structure of the third segmentation model is not limited in the present invention, and the third segmentation model includes, but is not limited to, a 2D-Unet neural network model, and the training process of the third segmentation model is similar to that of the first segmentation model, and will not be described herein. It should be further noted that, as those skilled in the art will appreciate, other image segmentation methods in the prior art may be used to segment the blood vessel image during the operation, which is not limited in this regard.
In an exemplary embodiment, the controller 300 is further configured to implement the steps of:
acquiring intraoperative blood vessel center line point cloud data according to the position information of blood vessel center points at different ultrasonic observation positions, the position information of the surgical instrument and the position information of the mechanical arm 110;
registering the intraoperative vessel centerline point cloud data with preoperative vessel centerline point cloud data acquired based on the preoperative vessel image to acquire a second position transformation matrix; and
according to the second position transformation matrix and the current position information of the surgical instrument, displaying the surgical instrument, in a fused manner, on a preoperative blood vessel segmentation result acquired based on the preoperative blood vessel image.
Specifically, please refer to fig. 15, which schematically illustrates a flow chart of a preoperative and intraoperative vascular fusion display provided in an embodiment of the present invention. As shown in fig. 15, according to the position coordinates of the blood vessel and the surgical instrument acquired during the operation, the center line of the blood vessel during the operation can be extracted, the second position transformation matrix can be acquired by performing 3D point cloud registration on the center line of the blood vessel during the operation and the center line of the blood vessel before the operation, and then the position coordinates of the surgical instrument acquired based on the segmentation result of the surgical instrument of the currently acquired image of the blood vessel during the operation are converted, so that the current position information of the surgical instrument can be mapped onto the segmentation result of the blood vessel before the operation for 3D fusion display. Therefore, by mapping the current position information of the surgical instrument to the preoperative blood vessel segmentation result for 3D fusion display, real-time tracking of the surgical instrument can be realized, so that more accurate intraoperative visual navigation is provided on the preoperative three-dimensional image.
It should be noted that, for each ultrasound observation position, the position information of the surgical instrument in the base coordinate system of the mechanical arm 110 (the coordinate system created with a certain point on the upright 130 as the origin) may be obtained from the position information of the surgical instrument at that ultrasound observation position (its position in the coordinate system of the ultrasonic probe 120) and the position information of the mechanical arm 110. This position may then be corrected using the position information of the blood vessel center point at the ultrasound observation position (its position in the base coordinate system of the mechanical arm 110) to ensure that the extracted intra-operative blood vessel center line is closer to the actual blood vessel center line; that is, the three-dimensional position of one blood vessel center point at the ultrasound observation position in the base coordinate system of the mechanical arm 110 is obtained. From the three-dimensional positions of the blood vessel center points at all ultrasound observation positions in the base coordinate system of the mechanical arm 110, the intra-operative blood vessel center line point cloud data can be obtained. It should be noted that, as will be understood by those skilled in the art, the position of the surgical instrument changes continuously at the same ultrasound observation position, so that for each ultrasound observation position the position information of a plurality of blood vessel center points in the base coordinate system of the mechanical arm 110 can be obtained; that is, a plurality of blood vessel center points can be extracted for each ultrasound observation position.
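The conversion from the probe coordinate system to the arm base coordinate system described above is a standard rigid-body change of frame. The sketch below assumes the arm pose and probe mounting are combined into a single 4x4 homogeneous matrix T_base_probe; that representation is a modelling choice of this illustration, not a detail fixed by the disclosure:

```python
import numpy as np

def instrument_in_base_frame(p_probe, T_base_probe):
    """Map an instrument position from the probe frame to the arm base frame
    using a 4x4 homogeneous transform T_base_probe (base <- probe)."""
    p = np.append(np.asarray(p_probe, dtype=float), 1.0)  # homogeneous coords
    return (T_base_probe @ p)[:3]
```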
Preferably, in order to ensure the reliability of the intra-operative data, a filter such as, but not limited to, a multidimensional Kalman filter may be used to filter, in real time, the three-dimensional position information of the acquired blood vessel center points in the base coordinate system of the mechanical arm 110, so as to obtain an optimal estimate of the intra-operative blood vessel center line. It should be noted that, as those skilled in the art can understand, since the surgical instrument travels along the target blood vessel, its movement track reflects the trend of the target blood vessel; however, since the surgical instrument does not always move along the center line of the target blood vessel, the three-dimensional position information of the surgical instrument is corrected based on the position information of the blood vessel center point obtained from the intra-operative blood vessel image, so that the corrected movement track coincides with the center line of the target blood vessel (i.e., the intra-operative blood vessel center line).
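One possible form of the multidimensional Kalman filtering mentioned above is an independent scalar filter per axis with a random-walk state model; the process and measurement variances q and r below are illustrative tuning values only, not values given in the disclosure:

```python
import numpy as np

def kalman_smooth(points, q=1e-3, r=1e-1):
    """Filter a sequence of 3-D centre-point measurements with an
    independent scalar Kalman filter per axis (random-walk model).

    points: (N, 3) noisy centre points in the arm base frame
    q: process noise variance; r: measurement noise variance
    """
    pts = np.asarray(points, dtype=float)
    x = pts[0].copy()          # state estimate
    p = np.ones(3)             # estimate variance per axis
    out = [x.copy()]
    for z in pts[1:]:
        p = p + q                      # predict: variance grows
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1.0 - k) * p
        out.append(x.copy())
    return np.array(out)
```

With q much smaller than r the filter trusts the motion model and smooths heavily; raising q makes it follow the raw measurements more closely.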
With continued reference to figs. 16a to 16c, fig. 16a schematically illustrates an intra-operative blood vessel image sequence according to a specific example of the present invention; fig. 16b schematically shows a segmentation result of the intra-operative blood vessel image sequence shown in fig. 16a; and fig. 16c is a schematic view of an extracted blood vessel center line. As shown in figs. 16a to 16c, a plurality of intra-operative blood vessel images are acquired along the target blood vessel and segmented by the third segmentation model to obtain segmentation results including the blood vessel and surgical instruments such as guidewires, catheters and prosthetic valves. For each frame of the intra-operative blood vessel images, the position information of the surgical instrument in the coordinate system of the ultrasonic probe 120 is obtained from the segmentation result of that frame; combining the three-dimensional coordinates of the mechanical arm 110 at the time of acquisition yields the three-dimensional position information of the surgical instrument in the base coordinate system of the mechanical arm 110, which is then corrected using the position information of the blood vessel center point at the time of acquisition, so as to obtain the three-dimensional coordinates of one of the center points on the intra-operative blood vessel center line.
Further, an ICP (Iterative Closest Point) registration algorithm may be adopted to register the intra-operative blood vessel center line point cloud data and the preoperative blood vessel center line point cloud data to obtain a second position transformation matrix; the position coordinates of the surgical instrument in the base coordinate system of the mechanical arm 110 may then be mapped onto the preoperative blood vessel segmentation result according to the second position transformation matrix for fusion display. Please refer to fig. 17, which schematically shows a fusion display of the intra-operative surgical instrument on the preoperative blood vessel segmentation result provided by a specific example of the present invention. It should be noted that, as those skilled in the art can understand, more details of the ICP registration algorithm can be found in the prior art and will not be described herein.
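A minimal point-to-point ICP in the spirit of the registration described here (brute-force nearest-neighbour matching plus a Kabsch rigid-alignment step per iteration) can be sketched as follows; this is a didactic illustration, not the implementation of the disclosed system, which may equally use an existing ICP library:

```python
import numpy as np

def icp_rigid(src, dst, iters=20):
    """Estimate the rigid transform (R, t) aligning src to dst.
    Brute-force nearest neighbours; adequate for small centreline clouds."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # nearest dst point for each current src point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # Kabsch: best rigid transform from cur to match
        mu_s, mu_d = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # accumulate the transform
    return R, t
```

In terms of the text above, the returned pair (R, t) plays the role of the second position transformation matrix mapping intra-operative points onto the preoperative frame.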
In an exemplary embodiment, the ultrasonic probe 120 is further configured to acquire intra-operative target area images and transmit them to the controller 300;
the controller 300 is further configured to implement the following steps:
segmenting the intra-operative target area image to obtain an intra-operative target area segmentation result;
segmenting the acquired preoperative target area image to obtain a preoperative target area segmentation result;
registering the intra-operative target region segmentation result and the preoperative target region segmentation result to obtain a third position transformation matrix;
and mapping the preoperative target region segmentation result onto the intra-operative target region image according to the third position transformation matrix so as to perform fusion display.
Specifically, please refer to fig. 18, which schematically illustrates a flow chart of the fusion display of the target region before and during the operation according to an embodiment of the present invention. As shown in fig. 18, whether the surgical instrument reaches a target area (for example, the aortic root) is determined according to the intra-operative blood vessel image acquired in real time, if the surgical instrument reaches the target area, the intra-operative target area image is acquired and segmented to acquire an intra-operative target area segmentation result, the intra-operative target area segmentation result is registered with a pre-operative target area segmentation result, and the pre-operative target area segmentation result is mapped onto the intra-operative target area image for fusion display based on the registration result.
With continued reference to fig. 19a, an image of an intra-operative target area provided by one embodiment of the present invention is schematically illustrated. As shown in fig. 19a, when the ultrasonic probe 120 is an ultrasound volume probe, the intra-operative target area image is composed of a plurality of two-dimensional images. Further, a pre-trained fourth segmentation model may be used to segment the plurality of two-dimensional images in the intra-operative target area image, and their segmentation results together form a three-dimensional intra-operative target area segmentation result; please refer to fig. 19b, which schematically shows the segmentation result of the intra-operative target area image shown in fig. 19a. Taking the target area as the aortic root as an example, the segmentation result of the intra-operative target area image includes the aorta and the outflow tract. With continued reference to fig. 19c, a preoperative target region segmentation result provided by a specific example of the present invention may be obtained by segmenting the preoperative target region image using a pre-trained fifth segmentation model. Preferably, in order to improve the registration effect, both the intra-operative target region segmentation result and the preoperative target region segmentation result may be converted into 3D point cloud data, and the two sets of 3D point cloud data registered using the ICP registration algorithm, thereby obtaining the third position transformation matrix. Please continue to refer to fig. 19d, which is a schematic diagram illustrating the result of fusion of the target region before and during the operation according to an embodiment of the present invention.
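Converting a segmentation result into 3D point cloud data for the ICP step can be as simple as collecting the foreground voxel indices and scaling them by the voxel spacing; the helper below is an assumed illustration of that conversion, not part of the disclosure:

```python
import numpy as np

def mask_to_point_cloud(mask, spacing=(1.0, 1.0, 1.0)):
    """Convert a binary 3-D segmentation mask into an (N, 3) point cloud in
    physical units, so intra- and pre-operative results can be registered."""
    idx = np.argwhere(mask > 0)               # foreground voxel indices
    return idx * np.asarray(spacing, float)   # scale by voxel spacing
```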
Therefore, by fusion display of the preoperative target region segmentation result on the intra-operative target region image, more accurate visual guidance can be provided for a doctor to release surgical instruments such as a prosthetic valve. It should be noted that, as those skilled in the art can understand, the specific network structures of the fourth segmentation model and the fifth segmentation model are not limited in the present invention, the fourth segmentation model includes but is not limited to a 2D-Unet neural network model, and the fifth segmentation model includes but is not limited to a 3D-Unet neural network model; the training process of the fourth segmentation model and the fifth segmentation model is similar to the training process of the first segmentation model, and will not be described in detail herein. It should be further noted that, as those skilled in the art can understand, other image segmentation methods in the prior art may be used to segment the intra-operative target area image and the pre-operative target area image, which is not limited in this aspect of the present invention.
In an exemplary embodiment, the controller 300 is further configured to implement the steps of:
acquiring a center line of the preoperative target area according to the segmentation result of the preoperative target area;
straightening the pre-operation target region segmentation result according to the pre-operation target region center line to obtain a pre-operation straightening target region diagram;
and calculating relevant parameters of the target area according to the preoperative straightened target area graph.
Specifically, for how to acquire the preoperative target area center line, reference may be made to the description of how to acquire the preoperative blood vessel center line; for how to straighten the preoperative target area segmentation result according to the preoperative target area center line, reference may be made to the description of how to straighten the preoperative blood vessel segmentation result according to the preoperative blood vessel center line; these descriptions are therefore not repeated herein. Taking the target area as the aortic root as an example, quantitative analysis of the preoperative straightened aortic root diagram yields parameters such as the diameter, area and perimeter of planes such as the aortic valve, the outflow tract and the sinus, completing the parameter analysis of the aortic valve and thereby providing a basis for selecting a suitable prosthetic valve.
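For one straightened cross-section, parameters such as area, perimeter and an equal-area ("equivalent") diameter can be computed from the section contour; the polygon-based sketch below is an assumption about how such measurements might be made, not the disclosed method:

```python
import numpy as np

def cross_section_params(contour, spacing=1.0):
    """Area, perimeter and equivalent diameter of one cross-section,
    given its closed contour as an (N, 2) polygon in pixel coordinates."""
    c = np.asarray(contour, float) * spacing
    x, y = c[:, 0], c[:, 1]
    # shoelace formula for the polygon area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # sum of edge lengths, closing the polygon
    seg = np.roll(c, -1, axis=0) - c
    perimeter = np.sqrt((seg ** 2).sum(axis=1)).sum()
    diameter = 2.0 * np.sqrt(area / np.pi)  # diameter of the equal-area circle
    return area, perimeter, diameter
```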
Based on the same inventive concept, the invention also provides a navigation method for vascular interventional operation, comprising the following steps:
acquiring a pose image and a preoperative blood vessel image of a patient in operation;
matching the intra-operative patient pose image with the preoperative blood vessel image to obtain intra-operative position information of a target blood vessel;
controlling a mechanical arm to drive an ultrasonic probe to move to an initial ultrasonic observation position corresponding to an initial section of the target blood vessel according to the intraoperative position information of the target blood vessel, and guiding a surgical instrument to penetrate into the target blood vessel according to an intraoperative blood vessel image acquired by the ultrasonic probe at the initial ultrasonic observation position; and
controlling the mechanical arm to drive the ultrasonic probe to move according to the intra-operative blood vessel image acquired by the ultrasonic probe, so that at each ultrasound observation position the ultrasonic probe is always positioned right above the target blood vessel and tracks the surgical instrument in real time, thereby guiding the surgical instrument to travel along the target blood vessel until it reaches a target area.
Therefore, the vascular interventional operation navigation method provided by the invention uses the intra-operative blood vessel image acquired by the ultrasonic probe, instead of the DSA image of the prior art, to guide the vascular puncture, and automatically tracks the blood vessel and the surgical instrument to realize automatic navigation, which can effectively avoid the injury of ionizing radiation to patients and doctors. In addition, the vascular interventional operation navigation method provided by the invention drives the ultrasonic probe through the mechanical arm so that it is always positioned right above the target blood vessel, which effectively ensures the quality of the intra-operative blood vessel image acquired by the ultrasonic probe and realizes an accurate intra-operative navigation function.
In an exemplary embodiment, the matching the intra-operative patient pose image and the pre-operative blood vessel image to obtain intra-operative location information of a target blood vessel includes:
dividing the pose image of the patient in operation to obtain a body surface image of the patient in operation;
registering the body surface image of the patient in operation and the preoperative blood vessel image to obtain a first position transformation matrix;
and mapping the preoperative blood vessel center line acquired based on the preoperative blood vessel image to the body surface image of the intraoperative patient according to the first position transformation matrix so as to acquire intraoperative position information of the access blood vessel.
In an exemplary embodiment, the intra-operative patient pose image is a two-dimensional image and the pre-operative vessel image is a three-dimensional image; the registering the intra-operative patient body surface image and the pre-operative blood vessel image to obtain a first positional transformation matrix includes:
reconstructing the preoperative blood vessel image according to preset digital image reconstruction parameters to obtain a corresponding two-dimensional preoperative simulated blood vessel image;
registering the body surface image of the patient in the operation and the two-dimensional preoperation simulated blood vessel image to obtain a first position transformation matrix;
The mapping the preoperative vessel centerline to the intra-operative patient body surface image comprises:
projecting the preoperative blood vessel center line into a two-dimensional preoperative simulated blood vessel center line according to the digital image reconstruction parameters;
and mapping the two-dimensional preoperative simulated vessel center line to the body surface image of the patient in operation according to the first position transformation matrix.
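As an illustration of projecting the three-dimensional preoperative center line into a two-dimensional simulated view, a simple pinhole model can stand in for the digital image reconstruction parameters; the focal length f and image center below are assumed stand-ins, not parameters named by the disclosure:

```python
import numpy as np

def project_centerline(points_3d, f, center):
    """Project 3-D centre-line points to 2-D with a pinhole model:
    u = f*x/z + cx, v = f*y/z + cy."""
    p = np.asarray(points_3d, float)
    uv = f * p[:, :2] / p[:, 2:3]          # perspective divide by depth z
    return uv + np.asarray(center, float)  # shift to the image centre
```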
In an exemplary embodiment, the vascular interventional operation navigation method provided by the invention further comprises the following steps:
segmenting the preoperative blood vessel image to obtain preoperative blood vessel segmentation results;
acquiring a preoperative blood vessel center line according to the preoperative blood vessel segmentation result; and
straightening the preoperative blood vessel segmentation result according to the preoperative blood vessel center line to obtain a preoperative straightened blood vessel map;
and calculating relevant parameters of the target blood vessel according to the preoperative straightened blood vessel graph.
In an exemplary embodiment, the controlling the mechanical arm to drive the ultrasonic probe to move according to the intra-operative blood vessel image acquired by the ultrasonic probe includes:
step A, acquiring an intraoperative blood vessel image sequence acquired by the ultrasonic probe at a current ultrasonic observation position;
Step B, acquiring two-dimensional position deviation information between a blood vessel center point at the current ultrasonic observation position and the ultrasonic probe center point and advancing information of the surgical instrument according to the intraoperative blood vessel image sequence;
step C, acquiring target position information of the mechanical arm according to the position information of the mechanical arm at the current ultrasonic observation position, the two-dimensional position deviation between the blood vessel center point and the ultrasonic probe center point and the advancing information of the surgical instrument;
and step D, controlling the mechanical arm to drive the ultrasonic probe to move to the next ultrasound observation position according to the target position information of the mechanical arm, and returning to execute step A.
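Steps A to D form a closed tracking loop; the sketch below uses hypothetical callable interfaces (none of these names come from the disclosure) to show its structure:

```python
import numpy as np

def tracking_loop(acquire_sequence, arm_position, move_arm, target_reached,
                  max_steps=1000):
    """Steps A-D as a loop (all callables are assumed interfaces):
    acquire_sequence() -> (o_v, o_e): the 2-D centre-point deviation (z = 0)
        and the instrument travel computed from one image sequence (steps A, B)
    arm_position() -> current arm coordinates C_r
    move_arm(target): drive the arm to the target position (step D)
    target_reached() -> True once the instrument is in the target area
    """
    for _ in range(max_steps):
        if target_reached():
            break
        o_v, o_e = acquire_sequence()                           # steps A + B
        target = np.asarray(arm_position(), float) + o_v + o_e  # step C
        move_arm(target)                                        # step D
```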
In an exemplary embodiment, the vascular interventional operation navigation method provided by the invention further comprises the following steps:
acquiring intraoperative blood vessel center line point cloud data according to the position information of blood vessel center points at different ultrasonic observation positions, the position information of the surgical instrument and the position information of the mechanical arm;
registering the intraoperative vessel centerline point cloud data with preoperative vessel centerline point cloud data acquired based on the preoperative vessel image to acquire a second position transformation matrix; and
according to the second position transformation matrix and the current position information of the surgical instrument, displaying the surgical instrument, in a fused manner, on a preoperative blood vessel segmentation result acquired based on the preoperative blood vessel image.
In an exemplary embodiment, the vascular interventional operation navigation method provided by the invention further comprises the following steps:
segmenting the acquired intra-operative target area image to obtain an intra-operative target area segmentation result;
segmenting the acquired preoperative target area image to obtain a preoperative target area segmentation result;
registering the intra-operative target region segmentation result and the preoperative target region segmentation result to obtain a third position transformation matrix;
and mapping the preoperative target region segmentation result onto the intra-operative target region image according to the third position transformation matrix so as to perform fusion display.
In an exemplary embodiment, the vascular interventional operation navigation method provided by the invention further comprises the following steps:
acquiring a center line of the preoperative target area according to the segmentation result of the preoperative target area;
straightening the pre-operation target region segmentation result according to the pre-operation target region center line to obtain a pre-operation straightening target region diagram;
and calculating relevant parameters of the target area according to the preoperative straightened target area graph.
Based on the same inventive concept, the present invention further provides an electronic device, please refer to fig. 20, which schematically shows a block structure schematic diagram of the electronic device according to an embodiment of the present invention. As shown in fig. 20, the electronic device includes a processor 610 and a memory 630, where the memory 630 stores a computer program that, when executed by the processor 610, implements the vascular interventional procedure navigation method described above.
Therefore, with the electronic device provided by the invention, the intra-operative blood vessel image acquired by the ultrasonic probe is used, instead of the DSA image of the prior art, to guide the vascular puncture, and the blood vessel and the surgical instrument are automatically tracked to realize automatic navigation, which can effectively avoid the injury of ionizing radiation to patients and doctors. In addition, the electronic device provided by the invention drives the ultrasonic probe through the mechanical arm so that it is always positioned right above the target blood vessel, which effectively ensures the quality of the intra-operative blood vessel image acquired by the ultrasonic probe and realizes an accurate intra-operative navigation function.
Further, as shown in fig. 20, the electronic device further includes a communication interface 620 and a communication bus 640, wherein the processor 610, the communication interface 620, and the memory 630 communicate with each other via the communication bus 640. The communication bus 640 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The communication bus 640 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus. The communication interface 620 is used for communication between the electronic device and other devices.
The processor 610 of the present invention may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 610 is the control center of the electronic device and connects the various parts of the overall electronic device using various interfaces and lines.
The memory 630 may be used to store the computer program, and the processor 610 implements various functions of the electronic device by running or executing the computer program stored in the memory 630 and invoking data stored in the memory 630.
The memory 630 may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The present invention also provides a readable storage medium having stored therein a computer program which, when executed by a processor, can implement the vascular interventional operation navigation method described above. Therefore, with the readable storage medium provided by the invention, the intra-operative blood vessel image acquired by the ultrasonic probe is used, instead of the DSA image of the prior art, to guide the vascular puncture, and the blood vessel and the surgical instrument are automatically tracked to realize automatic navigation, which can effectively avoid the injury of ionizing radiation to patients and doctors. In addition, the ultrasonic probe is driven by the mechanical arm so that it is always positioned right above the target blood vessel, which effectively ensures the quality of the intra-operative blood vessel image acquired by the ultrasonic probe and realizes an accurate intra-operative navigation function.
It should be noted that the readable storage medium provided by the present invention may employ any combination of one or more computer readable media. The readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In summary, compared with the prior art, the vascular intervention operation navigation system, the vascular intervention operation navigation method, the electronic device and the readable storage medium provided by the invention have the following advantages:
the vascular interventional operation navigation system provided by the invention can initially locate the intra-operative position of a target blood vessel (such as an aortic blood vessel) by matching the intra-operative patient pose image with the preoperative blood vessel image acquired before the operation. The intra-operative blood vessel image acquired by the ultrasonic probe is used, instead of the DSA image of the prior art, to guide the vascular puncture, and the blood vessel and the surgical instrument are automatically tracked to realize automatic navigation, thereby effectively avoiding the injury of ionizing radiation to patients and doctors. In addition, the ultrasonic probe is driven by the mechanical arm so that it is always positioned right above the target blood vessel, which effectively ensures the quality of the intra-operative blood vessel image acquired by the ultrasonic probe; the vascular interventional operation navigation system provided by the invention can therefore realize an accurate intra-operative navigation function.
Because the vascular intervention navigation method, the electronic equipment and the readable storage medium provided by the invention belong to the same conception as the vascular intervention navigation system provided by the invention, the vascular intervention navigation method, the electronic equipment and the readable storage medium provided by the invention have all the advantages of the vascular intervention navigation system provided by the invention, so that the beneficial effects of the vascular intervention navigation method, the electronic equipment and the readable storage medium provided by the invention are not repeated one by one.
It should be noted that computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the apparatus and methods disclosed in the embodiments herein may be implemented in other ways, and the apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments herein. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments herein may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope; any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the present invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from its spirit or scope. The present invention is therefore intended to cover such modifications and variations insofar as they come within the scope of the invention or its equivalents.

Claims (14)

1. A vascular interventional surgery navigation system, characterized by comprising an ultrasound imaging device, an image acquisition device, and a controller, the ultrasound imaging device and the image acquisition device both being communicatively connected to the controller;
the image acquisition device is configured to acquire an intra-operative patient pose image and transmit the patient pose image to the controller;
the ultrasound imaging device comprises a robotic arm and an ultrasound probe mounted at the distal end of the robotic arm, the robotic arm being configured to drive the ultrasound probe to different ultrasound observation positions, and the ultrasound probe being configured to acquire an intra-operative blood vessel image at each ultrasound observation position and transmit the intra-operative blood vessel image to the controller;
the controller is configured to implement the steps of:
matching the intra-operative patient pose image with an acquired preoperative blood vessel image to obtain intra-operative position information of a target blood vessel;
controlling the robotic arm to drive the ultrasound probe to an initial ultrasound observation position corresponding to an initial section of the target blood vessel according to the intra-operative position information of the target blood vessel, and guiding a surgical instrument to puncture into the target blood vessel according to the intra-operative blood vessel image acquired by the ultrasound probe at the initial ultrasound observation position; and
controlling the robotic arm to drive the ultrasound probe to move according to the intra-operative blood vessel images acquired by the ultrasound probe, so that each ultrasound observation position of the probe remains directly above the target blood vessel and the surgical instrument is tracked in real time, thereby guiding the surgical instrument to travel along the target blood vessel until it reaches a target area.
2. The vascular interventional procedure navigation system of claim 1, wherein the ultrasound imaging device further comprises a force sensor mounted on the ultrasound probe, the force sensor being configured to collect contact force information between the ultrasound probe and the patient's body surface and transmit the contact force information to the controller, and the controller being further configured to control the robotic arm to keep the ultrasound probe in conformity with the patient's body surface based on the contact force information.
3. The vascular interventional procedure navigation system of claim 1, wherein the ultrasound imaging device further comprises a couplant delivery assembly mounted to a distal end of the robotic arm, the couplant delivery assembly configured to apply couplant at each ultrasound viewing location.
4. The vascular interventional procedure navigation system of claim 1, wherein the controller is configured to match the intra-operative patient pose image with the acquired preoperative blood vessel image to obtain intra-operative position information of an access blood vessel by:
segmenting the intra-operative patient pose image to obtain an intra-operative patient body surface image;
registering the intra-operative patient body surface image with the preoperative blood vessel image to obtain a first position transformation matrix; and
mapping a preoperative blood vessel centerline, acquired based on the preoperative blood vessel image, onto the intra-operative patient body surface image according to the first position transformation matrix, so as to obtain the intra-operative position information of the access blood vessel.
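As an illustrative sketch (not the patent's implementation), the mapping step of claim 4 amounts to applying a homogeneous position transformation matrix to each centerline point; the function name and array layout below are assumptions:

```python
import numpy as np

def map_centerline(centerline_pts: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Map preoperative centerline points (N x 3) into the
    intra-operative frame using a 4x4 homogeneous transform T
    (the 'first position transformation matrix'). Illustrative only."""
    n = centerline_pts.shape[0]
    homogeneous = np.hstack([centerline_pts, np.ones((n, 1))])  # N x 4
    mapped = (T @ homogeneous.T).T
    return mapped[:, :3]
```

In practice the matrix would come from an image registration routine; here it is simply given.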
5. The vascular interventional procedure navigation system of claim 4, further comprising a laser projection device communicatively connected to the controller, the controller being further configured to project the preoperative blood vessel centerline onto the patient's body surface according to the mapping of the preoperative blood vessel centerline in the intra-operative patient body surface image.
6. The vascular interventional procedure navigation system of claim 4, wherein the intra-operative patient pose image is a two-dimensional image and the preoperative blood vessel image is a three-dimensional image;
the controller is configured to register the intra-operative patient body surface image with the preoperative blood vessel image to obtain the first position transformation matrix by:
reconstructing the preoperative blood vessel image according to preset digital image reconstruction parameters to obtain a corresponding two-dimensional preoperative simulated blood vessel image; and
registering the intra-operative patient body surface image with the two-dimensional preoperative simulated blood vessel image to obtain the first position transformation matrix;
and the controller is configured to map the preoperative blood vessel centerline onto the intra-operative patient body surface image by:
projecting the preoperative blood vessel centerline into a two-dimensional preoperative simulated blood vessel centerline according to the digital image reconstruction parameters; and
mapping the two-dimensional preoperative simulated blood vessel centerline onto the intra-operative patient body surface image according to the first position transformation matrix.
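A toy version of the 3D-to-2D step in claim 6 can be sketched with a parallel-ray projection: voxel intensities are integrated along one axis to form a simulated two-dimensional image, and centerline points are projected onto the same plane by dropping the integrated coordinate. This is a deliberate simplification of the claim's "digital image reconstruction parameters" (a real digitally reconstructed radiograph would model the projection geometry); all names are assumptions:

```python
import numpy as np

def simulate_drr(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Toy digitally reconstructed radiograph: integrate voxel
    intensities along one axis of the 3D volume."""
    return volume.sum(axis=axis)

def project_centerline(pts3d: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project 3D centerline points (N x 3) onto the same 2D plane
    by dropping the integrated coordinate."""
    keep = [i for i in range(3) if i != axis]
    return pts3d[:, keep]
```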
7. The vascular interventional procedure navigation system of claim 1, wherein the controller is further configured to implement the steps of:
segmenting the preoperative blood vessel image to obtain a preoperative blood vessel segmentation result;
acquiring a preoperative blood vessel centerline from the preoperative blood vessel segmentation result;
straightening the preoperative blood vessel segmentation result along the preoperative blood vessel centerline to obtain a preoperative straightened blood vessel map; and
calculating relevant parameters of the target blood vessel from the preoperative straightened blood vessel map.
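The straightening of claim 7 re-parameterizes the vessel by distance along its centerline, so that cross-sections and diameters can be plotted against a single longitudinal coordinate. A minimal sketch of that coordinate (the function name is an assumption, and real straightening would additionally resample cross-sections perpendicular to the centerline):

```python
import numpy as np

def arc_length_param(centerline: np.ndarray) -> np.ndarray:
    """Cumulative arc length along a centerline (N x 3): the
    longitudinal coordinate of a straightened vessel map."""
    seg = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    return np.concatenate([[0.0], np.cumsum(seg)])
```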
8. The vascular interventional procedure navigation system of claim 1, wherein the controller is configured to control the robotic arm to drive the ultrasound probe to move by:
step A: acquiring an intra-operative blood vessel image sequence captured by the ultrasound probe at the current ultrasound observation position;
step B: from the intra-operative blood vessel image sequence, acquiring two-dimensional position deviation information between the blood vessel center point at the current ultrasound observation position and the ultrasound probe center point, together with advance information of the surgical instrument;
step C: acquiring target position information of the robotic arm from the position information of the robotic arm at the current ultrasound observation position, the two-dimensional position deviation between the blood vessel center point and the ultrasound probe center point, and the advance information of the surgical instrument; and
step D: controlling the robotic arm to drive the ultrasound probe to the next ultrasound observation position according to the target position information of the robotic arm, and returning to step A.
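One iteration of steps B-C can be sketched under a deliberately simplified planar model: the in-plane deviation between the vessel and probe centers is corrected laterally, and the instrument's advance is followed along the scan direction. All names, axis assignments, and the pixel scale below are illustrative assumptions, not the patent's kinematics:

```python
import numpy as np

def tracking_step(arm_pos, vessel_center_px, probe_center_px,
                  advance_mm, mm_per_px=0.1):
    """Compute the next arm target from the current arm position,
    the pixel deviation between vessel and probe centers, and the
    instrument's advance along the vessel. Illustrative sketch."""
    dev = (np.asarray(vessel_center_px, dtype=float) -
           np.asarray(probe_center_px, dtype=float)) * mm_per_px
    target = np.asarray(arm_pos, dtype=float).copy()
    target[0] += dev[0]      # lateral correction toward vessel center
    target[1] += dev[1]
    target[2] += advance_mm  # follow the instrument along the vessel
    return target
```

Step D would then command the arm to `target` and the loop would repeat from step A.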
9. The vascular interventional procedure navigation system of claim 8, wherein the controller is further configured to implement the steps of:
acquiring intra-operative blood vessel centerline point cloud data from the position information of the blood vessel center points at the different ultrasound observation positions, the position information of the surgical instrument, and the position information of the robotic arm;
registering the intra-operative blood vessel centerline point cloud data with preoperative blood vessel centerline point cloud data acquired based on the preoperative blood vessel image to obtain a second position transformation matrix; and
fusing and displaying the surgical instrument on a preoperative blood vessel segmentation result acquired based on the preoperative blood vessel image, according to the second position transformation matrix and the current position information of the surgical instrument.
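For point sets with known correspondences, the rigid registration that yields such a transformation matrix has a closed-form least-squares solution (the Kabsch/SVD method). The sketch below is a minimal stand-in for the centerline registration of claim 9; real centerline point clouds would first need a correspondence search such as ICP:

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares rigid alignment of corresponding point sets
    (both N x 3), returning a 4x4 homogeneous transform that maps
    src onto dst. Illustrative Kabsch/SVD implementation."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```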
10. The vascular interventional procedure navigation system of claim 1, wherein the ultrasound probe is further configured to acquire intra-operative target region images and transmit them to the controller;
the controller is further configured to implement the steps of:
segmenting the intra-operative target region image to obtain an intra-operative target region segmentation result;
segmenting an acquired preoperative target region image to obtain a preoperative target region segmentation result;
registering the intra-operative target region segmentation result with the preoperative target region segmentation result to obtain a third position transformation matrix; and
mapping the preoperative target region segmentation result onto the intra-operative target region image according to the third position transformation matrix for fusion display.
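The fusion display of claim 10 can be sketched as a simple alpha blend of the mapped preoperative mask onto the intra-operative image; the function name, the 8-bit intensity range, and the blend factor are all illustrative assumptions:

```python
import numpy as np

def fuse_display(intraop_img: np.ndarray, mapped_mask: np.ndarray,
                 alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend a preoperative segmentation mask (already mapped
    by the third position transformation matrix) onto the
    intra-operative target region image. Illustrative sketch."""
    out = intraop_img.astype(float).copy()
    sel = mapped_mask > 0
    out[sel] = (1.0 - alpha) * out[sel] + alpha * 255.0
    return out
```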
11. The vascular interventional procedure navigation system of claim 10, wherein the controller is further configured to implement the steps of:
acquiring a preoperative target region centerline from the preoperative target region segmentation result;
straightening the preoperative target region segmentation result along the preoperative target region centerline to obtain a preoperative straightened target region map; and
calculating relevant parameters of the target region from the preoperative straightened target region map.
12. A navigation method for vascular interventional procedures, comprising:
acquiring an intra-operative patient pose image and a preoperative blood vessel image;
matching the intra-operative patient pose image with the preoperative blood vessel image to obtain intra-operative position information of a target blood vessel;
controlling a robotic arm to drive an ultrasound probe to an initial ultrasound observation position corresponding to an initial section of the target blood vessel according to the intra-operative position information of the target blood vessel, and guiding a surgical instrument to puncture into the target blood vessel according to the intra-operative blood vessel image acquired by the ultrasound probe at the initial ultrasound observation position; and
controlling the robotic arm to drive the ultrasound probe to move according to the intra-operative blood vessel images acquired by the ultrasound probe, so that each ultrasound observation position of the probe remains directly above the target blood vessel and the surgical instrument is tracked in real time, thereby guiding the surgical instrument to travel along the target blood vessel until it reaches a target area.
13. An electronic device comprising a processor and a memory, the memory having stored thereon a computer program which, when executed by the processor, implements the vascular interventional procedure navigation method of claim 12.
14. A readable storage medium, wherein a computer program is stored in the readable storage medium, which, when executed by a processor, implements the vascular interventional procedure navigation method of claim 12.
CN202310140813.XA 2023-02-15 2023-02-15 Vascular intervention navigation system, method, electronic device and readable storage medium Pending CN116158849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310140813.XA CN116158849A (en) 2023-02-15 2023-02-15 Vascular intervention navigation system, method, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
CN116158849A 2023-05-26

Family

ID=86419639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310140813.XA Pending CN116158849A (en) 2023-02-15 2023-02-15 Vascular intervention navigation system, method, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN116158849A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116965930A (en) * 2023-09-22 2023-10-31 北京智愈医疗科技有限公司 Ultrasonic image-based surgical instrument displacement monitoring device
CN116965930B (en) * 2023-09-22 2023-12-22 北京智愈医疗科技有限公司 Ultrasonic image-based surgical instrument displacement monitoring device

Similar Documents

Publication Publication Date Title
JP7093801B2 (en) A system that facilitates position adjustment and guidance during surgery
CN107809955B (en) Real-time collimation and ROI-filter localization in X-ray imaging via automatic detection of landmarks of interest
US20180158201A1 (en) Apparatus and method for registering pre-operative image data with intra-operative laparoscopic ultrasound images
US8126241B2 (en) Method and apparatus for positioning a device in a tubular organ
US6533455B2 (en) Method for determining a coordinate transformation for use in navigating an object
US8315355B2 (en) Method for operating C-arm systems during repeated angiographic medical procedures
Song et al. Locally rigid, vessel-based registration for laparoscopic liver surgery
CN105520716B (en) Real-time simulation of fluoroscopic images
US10405817B2 (en) X-ray image diagnosis apparatus and medical system
JP2012115635A (en) Image processing method, image processing apparatus, imaging system, and program code
JP2016514531A (en) Device location in guided high-dose rate brachytherapy
US10111717B2 (en) System and methods for improving patent registration
CN116158849A (en) Vascular intervention navigation system, method, electronic device and readable storage medium
WO2013157457A1 (en) X-ray image capturing device, medical image processing device, x-ray image capturing method, and medical image processing method
CN109152929B (en) Image-guided treatment delivery
Cheng et al. An augmented reality framework for optimization of computer assisted navigation in endovascular surgery
US8467850B2 (en) System and method to determine the position of a medical instrument
CN112053346A (en) Method and system for determining operation guide information
CN117357250A (en) Fusion method of X-ray image and three-dimensional mapping image and interventional operation system
JP2017538497A (en) Adaptive planning and delivery of high dose rate brachytherapy
WO2008050316A2 (en) Method and apparatus for positioning a therapeutic device in a tubular organ dilated by an auxiliary device balloon
EP4275639A1 (en) System and method for assistance in a surgical procedure
US20220301100A1 (en) Providing a corrected dataset
CN115089294B (en) Interventional operation navigation method
US20220354588A1 (en) Method and system for providing a corrected dataset

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination