CN115998423A - Display method for simulated ablation and ultrasonic imaging system

Info

Publication number
CN115998423A
Authority
CN
China
Prior art keywords
dimensional
focus
space
image
ablation
Prior art date
Legal status
Pending
Application number
CN202111233973.6A
Other languages
Chinese (zh)
Inventor
Yu Kaixin (于开欣)
Cong Longfei (丛龙飞)
Jiang Tao (江涛)
Current Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd filed Critical Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN202111233973.6A
Publication of CN115998423A


Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

A display method for simulated ablation and an ultrasound imaging system. The method comprises: acquiring a two-dimensional ultrasound image of a focus in real time, and registering the two-dimensional ultrasound image with a preoperative three-dimensional image of the focus to obtain a registration result; according to the registration result, generating a three-dimensional model of a simulated ablation focus and a three-dimensional model of the focus both in the ultrasound image space in which the two-dimensional ultrasound image lies and in the three-dimensional image space in which the preoperative three-dimensional image lies; displaying, in a first display window, the two-dimensional ultrasound image and the three-dimensional models of the simulated ablation focus and of the focus generated in the ultrasound image space, with the display angle of the two-dimensional ultrasound image fixed; and displaying, in a second display window, the three-dimensional models of the simulated ablation focus and of the focus generated in the three-dimensional image space, with the position of the three-dimensional model of the focus fixed. The method can clearly display the two-dimensional ultrasound image while presenting the relative positional relationship between the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus.

Description

Display method for simulated ablation and ultrasonic imaging system
Technical Field
The present application relates to the field of ultrasound imaging technology, and more particularly to a display method for simulated ablation and an ultrasound imaging system.
Background
Real-time ultrasound-guided percutaneous puncture ablation is playing an increasingly important role in tumor treatment, offering good curative effect, minimal invasiveness, and fast postoperative recovery. Existing ablation interventions are mainly performed under the guidance of a two-dimensional ultrasound image: a doctor locates the approximate position of the focus region in a real-time ultrasound image or an ultrasound contrast image, roughly estimates the two-dimensional plane in which the maximum diameter of the focus lies, and formulates an ablation plan based on the two-dimensional image before performing the ablation.
With the development of new technology, three-dimensional reconstruction of focus images can be performed using computer 3D reconstruction software and image processing techniques, realizing three-dimensional visualization of the focus. A three-dimensionally visualized focus image can present regions that a two-dimensional image can hardly display and provide objective anatomical information, with the advantages of accuracy, vividness, and realism. Three-dimensional display can intuitively and clearly present the positional relationship between the focus and the surrounding tissue from any viewing angle, and an ultrasound-assisted ablation intervention system based on three-dimensional display and positioning devices allows doctors to plan the operation on the focus intuitively, optimize the surgical scheme, and improve surgical skills, thereby improving surgical safety. In current three-dimensional display schemes, the spatial position of the three-dimensional model of the focus is fixed while the spatial position of the ultrasound sector changes in real time with the movement of the ultrasound probe; however, in an actual ablation operation the ultrasound image must serve as the reference, and the real-time change of the position of the ultrasound sector prevents the three-dimensional display window from clearly showing the image content of the current ultrasound sector, increasing the difficulty of the operation and the workload of doctors.
Disclosure of Invention
This summary introduces a series of concepts in simplified form that are described in further detail in the detailed description. This summary is not intended to identify the key features or essential features of the claimed subject matter, nor is it intended to be used to determine the scope of the claimed subject matter.
An embodiment of the present application provides a display method for simulated ablation, the method comprising: acquiring a two-dimensional ultrasound image of a focus in real time through an ultrasound probe, and registering the two-dimensional ultrasound image with a preoperative three-dimensional image of the focus to obtain a registration result; according to the registration result, simultaneously generating a three-dimensional model of a simulated ablation focus and a three-dimensional model of the focus in the ultrasound image space in which the two-dimensional ultrasound image lies and in the three-dimensional image space in which the preoperative three-dimensional image lies, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model of the simulated ablation focus is generated according to simulated ablation parameters; displaying, in a first display window, the two-dimensional ultrasound image and the three-dimensional models of the simulated ablation focus and of the focus generated in the ultrasound image space, wherein the display angle of the two-dimensional ultrasound image in the first display window is fixed; and mapping the two-dimensional ultrasound image from the ultrasound image space to the three-dimensional image space according to the registration result, and displaying, in a second display window, the three-dimensional models of the simulated ablation focus and of the focus generated in the three-dimensional image space together with the two-dimensional ultrasound image mapped to the three-dimensional image space, wherein the position of the three-dimensional model of the focus in the second display window is fixed.
In one embodiment, the ultrasound probe has a first spatial localization device, the registering the two-dimensional ultrasound image with a preoperative three-dimensional image of a lesion, comprising: registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of a focus according to the space positioning information obtained by the first space positioning device so as to obtain a transformation relationship between the ultrasonic image space and the three-dimensional image space.
In one embodiment, registering the two-dimensional ultrasound image with the preoperative three-dimensional image of the focus based on the spatial positioning information obtained by the first spatial positioning device comprises: matching the two-dimensional ultrasound image with two-dimensional sections of the preoperative three-dimensional image to obtain a matched section of the two-dimensional ultrasound image in the preoperative three-dimensional image; obtaining a coordinate transformation matrix between the two-dimensional ultrasound image and the matched section according to the coordinates of the same feature points in the two-dimensional ultrasound image and in the matched section; obtaining the transformation relationship between the three-dimensional image space and the world coordinate space according to the coordinate transformation matrix and the spatial positioning information; and obtaining the transformation relationship between the ultrasound image space and the three-dimensional image space according to the transformation relationship between the three-dimensional image space and the world coordinate space.
In one embodiment, the obtaining the transformation relation between the three-dimensional image space and the world coordinate space according to the coordinate transformation matrix and the space positioning information includes: acquiring a transformation relation between the ultrasonic image space and the space of a first space positioning device and a transformation relation between the space of the first space positioning device and the world coordinate space; obtaining a transformation relationship between the ultrasonic image space and the world coordinate space according to the transformation relationship between the ultrasonic image space and the space of the first space positioning device and the transformation relationship between the space of the first space positioning device and the world coordinate space; and obtaining the transformation relation between the three-dimensional image space and the world coordinate space according to the coordinate transformation matrix and the transformation relation between the ultrasonic image space and the world coordinate space.
In one embodiment, the pre-operative three-dimensional image includes a pre-operative three-dimensional ultrasound image obtained by three-dimensional ultrasound imaging of the lesion with the ultrasound probe, and the registering the two-dimensional ultrasound image with the pre-operative three-dimensional image of the lesion includes: and obtaining the transformation relation between the ultrasonic image space and the three-dimensional image space according to the positioning information acquired by the first space positioning device.
In one embodiment, generating a three-dimensional model of a simulated ablation focus from the simulated ablation parameters includes: obtaining the position of the simulated ablation focus in the ultrasonic image space according to the angle of a puncture frame arranged on the ultrasonic probe and the simulated ablation depth; and obtaining the position of the simulated ablation focus in the three-dimensional image space according to the position of the simulated ablation focus in the ultrasonic image space and the corresponding relation between the ultrasonic image space and the three-dimensional image space.
In one embodiment, obtaining the position of the simulated ablation focus in the ultrasound image space according to the angle of a puncture frame mounted on the ultrasound probe and the simulated ablation depth comprises: determining the position of the center point of the simulated ablation focus in the ultrasound image space according to the angle of the puncture frame and the simulated ablation depth; and generating the three-dimensional model of the simulated ablation focus in the ultrasound image space according to the position of the center point of the simulated ablation focus in the ultrasound image space and the size of the simulated ablation focus.
In one embodiment, generating a three-dimensional model of a simulated ablation focus from the simulated ablation parameters comprises: determining the position of the simulated ablation focus according to a second spatial positioning device arranged on the ablation needle, and generating the three-dimensional model of the simulated ablation focus according to that position.
In one embodiment, determining the position of the simulated ablation focus from a second spatial positioning device disposed on the ablation needle comprises: obtaining the position of the simulated ablation focus in the space of the second spatial positioning device according to the second spatial positioning device; determining the position of the simulated ablation focus in the three-dimensional image space according to the correspondence between the space of the second spatial positioning device and the world coordinate space and the correspondence between the world coordinate space and the three-dimensional image space; and determining the position of the simulated ablation focus in the ultrasound image space according to the correspondence between the space of the second spatial positioning device and the world coordinate space and the correspondence between the world coordinate space and the ultrasound image space.
In one embodiment, the method further comprises: and when a confirmation instruction of a user is received, controlling the ablation needle to ablate the focus at the current position of the ablation needle.
In one embodiment, the method further comprises: displaying, in the first display window and the second display window, an image of the ultrasound probe and an image of the simulated ablation needle connected with the simulated ablation focus.
A second aspect of embodiments of the present application provides a display method for simulated ablation, the method comprising: acquiring a two-dimensional ultrasound image of a focus in real time through an ultrasound probe, and registering the two-dimensional ultrasound image with a preoperative three-dimensional image of the focus to obtain a registration result; according to the registration result, simultaneously generating a three-dimensional model of a simulated ablation focus and a three-dimensional model of the focus in the ultrasound image space in which the two-dimensional ultrasound image lies and in the three-dimensional image space in which the preoperative three-dimensional image lies, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model of the simulated ablation focus is generated according to simulated ablation parameters; displaying, in a first display window, the two-dimensional ultrasound image and the three-dimensional models of the simulated ablation focus and of the focus generated in the ultrasound image space, wherein the display angle of the two-dimensional ultrasound image in the first display window is fixed; and displaying, in a second display window, the three-dimensional models of the simulated ablation focus and of the focus generated in the three-dimensional image space, wherein the position of the three-dimensional model of the focus in the second display window is fixed.
A third aspect of embodiments of the present application provides a display method for simulated ablation, the method comprising: acquiring a two-dimensional ultrasound image of a focus in real time through an ultrasound probe, and registering the two-dimensional ultrasound image with a preoperative three-dimensional image of the focus to obtain a registration result; according to the registration result, generating at least a three-dimensional model of a simulated ablation focus in the ultrasound image space in which the two-dimensional ultrasound image lies and at least a three-dimensional model of the focus in the three-dimensional image space in which the preoperative three-dimensional image lies, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model of the simulated ablation focus is generated according to simulated ablation parameters; displaying, in a first display window, the two-dimensional ultrasound image and the three-dimensional model of the simulated ablation focus generated in the ultrasound image space, wherein the display angle of the two-dimensional ultrasound image in the first display window is fixed; and displaying, in a second display window, the three-dimensional model of the focus generated in the three-dimensional image space, wherein the position of the three-dimensional model of the focus in the second display window is fixed.
A fourth aspect of embodiments of the present application provides a display method for simulated ablation, the method comprising: acquiring a two-dimensional ultrasound image of a focus in real time through an ultrasound probe, and registering the two-dimensional ultrasound image with a preoperative three-dimensional image of the focus to obtain a registration result; according to the registration result, simultaneously generating a three-dimensional model of a simulated ablation focus and a three-dimensional model of the focus in the ultrasound image space in which the two-dimensional ultrasound image lies and in the three-dimensional image space in which the preoperative three-dimensional image lies, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model of the simulated ablation focus is generated according to simulated ablation parameters; and displaying, in a first display window, the two-dimensional ultrasound image and the three-dimensional models of the simulated ablation focus and of the focus generated in the ultrasound image space, together with the three-dimensional models of the simulated ablation focus and of the focus generated in the three-dimensional image space, wherein the display angle of the two-dimensional ultrasound image in the first display window is fixed.
A fifth aspect of embodiments of the present application provides an ultrasound imaging system comprising: an ultrasonic probe; a transmitting circuit for exciting the ultrasonic probe to transmit ultrasonic waves to a target tissue; a receiving circuit for controlling the ultrasonic probe to receive the echo of the ultrasonic wave so as to obtain an echo signal of the ultrasonic wave; and the processor is used for executing the steps of the display method for simulating ablation.
According to the display method for simulated ablation and the ultrasound imaging system of the embodiments of the present application, the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus are generated in both the ultrasound image space and the three-dimensional image space. A doctor can therefore see the positional relationship of the ultrasound sector and the simulated ablation focus relative to the focus while the three-dimensional image space, i.e. the position of the three-dimensional model of the focus, remains unchanged, and can also see the displayed image content of the current ultrasound sector while the ultrasound image space, i.e. the position of the ultrasound sector, is fixed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
In the drawings:
FIG. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present application;
FIG. 2 shows a schematic flow chart of a display method of simulated ablation according to an embodiment of the present application;
FIG. 3 illustrates a schematic diagram of a spatial transformation relationship according to an embodiment of the present application;
FIG. 4 shows a schematic diagram of a display interface according to an embodiment of the present application;
FIG. 5 shows a schematic flow chart of a display method of simulated ablation according to another embodiment of the application;
FIG. 6 shows a schematic flow chart of a display method of simulated ablation according to another embodiment of the application;
fig. 7 shows a schematic flow chart of a display method of simulated ablation according to a further embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein. Based on the embodiments of the present application described herein, all other embodiments that may be made by one skilled in the art without the exercise of inventive faculty are intended to fall within the scope of protection of the present application.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced without one or more of these details. In other instances, some features well known in the art have not been described in order to avoid obscuring the present application.
It should be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
For a thorough understanding of the present application, detailed structures will be presented in the following description in order to illustrate the technical solutions presented herein. Alternative embodiments of the present application are described in detail below, however, the present application may have other implementations in addition to these detailed descriptions.
Next, an ultrasound imaging system according to an embodiment of the present application is described first with reference to fig. 1, fig. 1 showing a schematic block diagram of an ultrasound imaging system 100 according to an embodiment of the present application.
As shown in fig. 1, the ultrasound imaging system 100 includes an ultrasound probe 110, transmit circuitry 112, receive circuitry 114, a processor 116, and a display 118. Further, the ultrasound imaging system may also include a transmit/receive selection switch 120 and a beamforming module 122, and the transmit circuit 112 and the receive circuit 114 may be connected to the ultrasound probe 110 through the transmit/receive selection switch 120.
The ultrasound probe 110 includes a plurality of transducer elements, which may be arranged in a row as a linear array, in a two-dimensional matrix as an area array, or as a convex array. The transducer elements transmit ultrasound waves according to excitation electrical signals or convert received ultrasound waves into electrical signals, so each element can convert between electrical pulse signals and ultrasound waves, transmitting ultrasound into the tissue of the target region of the subject and receiving the ultrasound echoes reflected back by the tissue. During ultrasound detection, the transmit sequence and receive sequence control which transducer elements are used to transmit ultrasound and which are used to receive, or control the elements to transmit and receive in time slots. The transducer elements participating in transmission may be excited by electrical signals simultaneously, emitting ultrasound at the same time; alternatively, they may be excited by several electrical signals separated by a certain time interval, emitting ultrasound waves successively at that interval.
During ultrasound imaging, the transmit circuit 112 sends delay-focused transmit pulses to the ultrasound probe 110 through the transmit/receive selection switch 120. Excited by the transmit pulses, the ultrasound probe 110 emits an ultrasound beam into the tissue of the target region of the subject, receives, after a certain delay, the ultrasound echoes carrying tissue information reflected from the tissue, and converts the echoes back into electrical signals. The receive circuit 114 receives the electrical signals from the ultrasound probe 110 to obtain ultrasound echo signals and sends them to the beamforming module 122, which performs focusing delay, weighting, channel summation, and other processing on the echo data before passing it to the processor 116. The processor 116 performs signal detection, signal enhancement, data conversion, logarithmic compression, and other processing on the echo signals to form an ultrasound image. The ultrasound images obtained by the processor 116 may be displayed on the display 118 or stored in the memory 124.
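For orientation, the following is a minimal delay-and-sum sketch in Python (NumPy) of the focusing delay, weighting, and channel summation mentioned above; the array geometry, sampling parameters, and function names are illustrative assumptions, not the implementation of the beamforming module 122.
```python
import numpy as np

def delay_and_sum(channel_data, element_x, focus_x, focus_z, fs, c=1540.0):
    """Sum per-channel echoes after focusing delays toward one image point.

    channel_data: (n_elements, n_samples) RF data for one transmit event.
    element_x:    (n_elements,) lateral element positions in meters.
    focus_x, focus_z: image point coordinates in meters (z = depth).
    fs: sampling frequency in Hz; c: assumed speed of sound in m/s.
    """
    # Two-way travel time: transmit reference (depth only) + receive path per element.
    rx_dist = np.hypot(element_x - focus_x, focus_z)
    delays = (focus_z + rx_dist) / c                      # seconds, one delay per element
    idx = np.round(delays * fs).astype(int)               # nearest-sample focusing delay
    idx = np.clip(idx, 0, channel_data.shape[1] - 1)
    aligned = channel_data[np.arange(channel_data.shape[0]), idx]
    window = np.hanning(channel_data.shape[0])            # receive apodization (weighting)
    return float(np.sum(window * aligned))                # channel summation
```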
Optionally, the processor 116 may be implemented in software, hardware, firmware, or any combination thereof, and may use one or more application-specific integrated circuits (ASICs), general-purpose integrated circuits, microprocessors, programmable logic devices, any combination of the foregoing circuits and/or devices, or other suitable circuits or devices. The processor 116 may also control other components of the ultrasound imaging system 100 to perform the respective steps of the methods in the various embodiments of this specification.
The display 118 is connected with the processor 116, and the display 118 may be a touch display screen, a liquid crystal display screen, or the like; alternatively, the display 118 may be a stand-alone display such as a liquid crystal display, television, or the like that is independent of the ultrasound imaging system 100; alternatively, the display 118 may be a display screen of an electronic device such as a smart phone, tablet, or the like. Wherein the number of displays 118 may be one or more.
The display 118 may display the ultrasound image obtained by the processor 116. In addition, while displaying the ultrasound image, the display 118 may provide a graphical interface for human-computer interaction on which one or more controlled objects are provided; the user inputs operation instructions through the human-computer interaction device to control these objects and thereby execute the corresponding control operations. For example, icons displayed on the graphical interface can be operated with the human-computer interaction device to perform specific functions, such as drawing a region-of-interest box on the ultrasound image.
Optionally, the ultrasound imaging system 100 may further include other human-machine interaction devices in addition to the display 118, which are coupled to the processor 116, for example, the processor 116 may be coupled to the human-machine interaction device through an external input/output port, which may be a wireless communication module, a wired communication module, or a combination of both. The external input/output ports may also be implemented based on USB, bus protocols such as CAN, and/or wired network protocols, among others.
The man-machine interaction device may include an input device for detecting input information of a user, and the input information may be, for example, a control instruction for an ultrasonic wave transmission/reception timing, an operation input instruction for drawing a point, a line, a frame, or the like on an ultrasonic image, or may further include other instruction types. The input device may include one or more of a keyboard, mouse, scroll wheel, trackball, mobile input device (e.g., a mobile device with a touch display, a cell phone, etc.), multi-function knob, etc. The human-machine interaction means may also comprise an output device such as a printer.
The ultrasound imaging system 100 may also include a memory 124 for storing instructions executed by the processor, received ultrasound echoes, ultrasound images, and so forth. The memory may be a flash memory card, solid-state memory, a hard disk, or the like, and may be volatile and/or nonvolatile, removable and/or non-removable.
It should be understood that the components included in the ultrasound imaging system 100 shown in fig. 1 are illustrative only and may include more or fewer components. The present application is not limited thereto.
The method for displaying simulated ablation according to the embodiment of the present application is described below with reference to fig. 2, and fig. 2 is a schematic flowchart of a method 200 for displaying simulated ablation according to the embodiment of the present application. Specifically, the display method 200 for simulated ablation according to the embodiment of the present application includes the following steps:
in step S210, acquiring a two-dimensional ultrasonic image of a focus in real time through an ultrasonic probe, and registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of the focus to obtain a registration result;
in step S220, according to the registration result, generating a three-dimensional model of a simulated ablation focus and a three-dimensional model of the focus in the ultrasound image space in which the two-dimensional ultrasound image lies and in the three-dimensional image space in which the preoperative three-dimensional image lies, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model of the simulated ablation focus is generated according to simulated ablation parameters;
in step S230, displaying the two-dimensional ultrasound image, the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the ultrasound image space in a first display window, wherein the display angle of the two-dimensional ultrasound image in the first display window is fixed;
In step S240, the two-dimensional ultrasound image is mapped from the ultrasound image space to the three-dimensional image space according to the registration result, and the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the three-dimensional image space, and the two-dimensional ultrasound image mapped to the three-dimensional image space are displayed in a second display window, the position of the three-dimensional model of the focus in the second display window being fixed.
The display method 200 for simulated ablation of the embodiment of the present application is performed on the basis of fusion of a real-time two-dimensional ultrasound image with a pre-acquired three-dimensional image. The principle of image fusion is to establish, through the spatial positioning device of the ultrasound probe, a correspondence between the real-time two-dimensional ultrasound image and the pre-acquired three-dimensional image so that the two images can be fused and displayed. After image fusion, three-dimensional visualization of the focus three-dimensional model and the simulated ablation focus in the same spatial coordinate system is required, and their positional relationship changes in real time with the movement of the ultrasound probe. Because the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus are drawn in both the ultrasound image space and the three-dimensional image space, a doctor can see the positional relationship of the ultrasound sector and the simulated ablation focus relative to the focus while the three-dimensional image space, i.e. the position of the three-dimensional model of the focus, remains unchanged, and can also see the displayed image content of the current ultrasound sector while the ultrasound image space, i.e. the display angle of the two-dimensional ultrasound image, is fixed.
In step S210, before the ablation operation, an ultrasound probe carrying a first spatial positioning device, such as a positioning sensor bound to the probe, is controlled to scan the focus to obtain a two-dimensional ultrasound image. The focus may be a tumor in the target tissue of a subject, where the subject is a patient requiring an ablation operation and the target tissue may be any diseased organ such as the liver, stomach, lung, pancreas, thyroid, breast, or intestinal tract.
Illustratively, with reference to fig. 1, in step S210 the transmit circuit 112 may excite the ultrasound probe 110 through the transmit/receive selection switch 120 to periodically transmit ultrasound waves to the focus of the subject; the ultrasound probe 110 receives the ultrasound echoes returned from the focus, which are converted into ultrasound echo signals by the receive circuit 114. The beamforming module 122 performs focusing delay, weighting, channel summation, and other beamforming processing on the echo signals, and then sends the beamformed echo data to the processor 116 for signal detection, signal enhancement, data conversion, logarithmic compression, and other processing to obtain a two-dimensional ultrasound image. The two-dimensional ultrasound image is a gray-scale image, i.e., a B-mode ultrasound image.
After the two-dimensional ultrasound image acquired in real time is obtained, it is registered with the preoperative three-dimensional image of the focus to obtain the registration result.
The preoperative three-dimensional image of the focus may be acquired by a medical imaging device such as a computed tomography (CT) device, a magnetic resonance imaging (MRI) device, a positron emission tomography (PET) device, a digital X-ray imaging device, an ultrasound device, a digital subtraction angiography (DSA) device, or an optical imaging device. Because three-dimensional reconstruction takes a long time, the preoperative three-dimensional image of the focus is acquired before the operation.
Illustratively, the user may pre-introduce the preoperative three-dimensional image of the lesion into the ultrasound imaging system prior to initiating ultrasound imaging, including but not limited to via a storage medium such as a USB flash disk, optical disk, or via network transmission.
Registering the preoperatively acquired three-dimensional image of the focus with the two-dimensional ultrasound image acquired in real time makes it possible to exploit the spatial information of the three-dimensional image while retaining the real-time character of the two-dimensional ultrasound image. Registering the two-dimensional ultrasound image with the preoperative three-dimensional image means finding the spatial transformation between them so that the corresponding points of the two images are brought into one-to-one geometric correspondence. Registration may be rigid or non-rigid.
Specifically, during ultrasound scanning the first spatial positioning device fixed on the ultrasound probe continuously provides position information as the probe moves; the 6-degree-of-freedom spatial pose of the ultrasound probe can be obtained through the magnetic positioning controller, and the two-dimensional ultrasound image and the preoperative three-dimensional image can be registered using the image information and the magnetic positioning information. The processor of the ultrasound imaging system can be connected to the first spatial positioning device on the ultrasound probe in a wired or wireless manner to acquire the position information of the probe. The first spatial positioning device may position the ultrasound probe using any type of structure or principle, such as an optical positioning sensor or a magnetic field positioning sensor.
The spatial transformation relationship between the two-dimensional ultrasound image and the preoperative three-dimensional image is shown in fig. 3 and expressed as:
$T_{sec} = P \cdot R_{probe} \cdot A \cdot T_{us}$ (Equation 1)
where $T_{us}$ is the coordinate of a point in the ultrasound image space and $T_{sec}$ is the coordinate of the corresponding point in the preoperative three-dimensional image space; $A$ is the coordinate transformation from the ultrasound image space (coordinates $X_{us}, Y_{us}, Z_{us}$) to the space of the first spatial positioning device (coordinates $X_{sensor}, Y_{sensor}, Z_{sensor}$); $R_{probe}$ is the coordinate transformation from the space of the positioning device to the world coordinate space (coordinates $X_{MG}, Y_{MG}, Z_{MG}$); and $P$ is the coordinate transformation from the world coordinate system to the space of the preoperative three-dimensional image.
The coordinate transformation matrix from the ultrasound image space to the preoperative three-dimensional image space is therefore $P \cdot R_{probe} \cdot A$. During ultrasound imaging the first spatial positioning device is fixed on the ultrasound probe, so as long as the probe model is unchanged $A$ is constant and can be determined by calibration before registration. $R_{probe}$ is read directly from the magnetic positioning controller and changes continuously as the ultrasound probe moves. Thus, according to the spatial positioning information obtainable from the first spatial positioning device (i.e., $R_{probe}$), the two-dimensional ultrasound image can be registered with the preoperative three-dimensional image of the focus to obtain the transformation relationship between the ultrasound image space and the three-dimensional image space.
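As a concrete illustration of Equation 1, the sketch below composes the chain of homogeneous transforms with NumPy. The matrix values and helper names are hypothetical: in the system described here, $A$ would come from probe calibration, $R_{probe}$ from the magnetic positioning controller, and $P$ from registration.
```python
import numpy as np

def to_homogeneous(p):
    """Append 1 to a 3D point so 4x4 homogeneous matrices can act on it."""
    return np.append(np.asarray(p, dtype=float), 1.0)

# Hypothetical 4x4 homogeneous transforms (rotation + translation).
A = np.eye(4)            # ultrasound image space -> positioning-device space (calibration)
R_probe = np.eye(4)      # positioning-device space -> world space (read from the controller)
P = np.eye(4)            # world space -> preoperative 3D image space (from registration)
R_probe[:3, 3] = [10.0, 0.0, 5.0]   # e.g. the probe sits at (10, 0, 5) mm in world space

# Equation 1: T_sec = P . R_probe . A . T_us
T_us = to_homogeneous([12.5, 40.0, 0.0])   # a pixel of the 2D ultrasound image (z = 0)
T_sec = P @ R_probe @ A @ T_us             # corresponding point in the 3D image space
print(T_sec[:3])
```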
In one embodiment, $P$ is calculated from the result of image registration (i.e., the coordinate transformation between the two-dimensional ultrasound image and the preoperative three-dimensional image). If the image registration result between the ultrasound image space and the preoperative three-dimensional image is $M$, then:
$P = M \cdot A^{-1} \cdot R_{probe}^{-1}$ (Equation 2)
That is, to obtain the transformation relationship between the ultrasound image space and the three-dimensional image space, the two-dimensional ultrasound image is first matched with the two-dimensional sections of the preoperative three-dimensional image to obtain the matched section of the two-dimensional ultrasound image in the preoperative three-dimensional image, and the coordinate transformation matrix $M$ between the two-dimensional ultrasound image and the matched section is obtained from the coordinates of the same feature points in the two-dimensional ultrasound image and in the matched section. From the transformation relationship $A$ between the ultrasound image space and the space of the first spatial positioning device and the transformation relationship $R_{probe}$ between the space of the first spatial positioning device and the world coordinate space, the transformation relationship between the ultrasound image space and the world coordinate space ($R_{probe} \cdot A$) is obtained; then, from the coordinate transformation matrix $M$ and the transformation relationship between the ultrasound image space and the world coordinate space, the transformation relationship $P$ between the three-dimensional image space and the world coordinate space is obtained by Equation 2 above.
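A short sketch of Equation 2 under the same conventions: given a hypothetical registration result $M$, the transform $P$ is recovered by inverting the calibration and probe-pose transforms, and composing them back reproduces $M$.
```python
import numpy as np

def solve_P(M, A, R_probe):
    """Equation 2: P = M . A^{-1} . R_probe^{-1}.

    M:       ultrasound image space -> preoperative 3D image space (registration result)
    A:       ultrasound image space -> positioning-device space (probe calibration)
    R_probe: positioning-device space -> world space (probe pose at registration time)
    """
    return M @ np.linalg.inv(A) @ np.linalg.inv(R_probe)

# Sanity check with hypothetical transforms: composing P back must reproduce M.
rng = np.random.default_rng(0)
A = np.eye(4); A[:3, 3] = rng.normal(size=3)
R_probe = np.eye(4); R_probe[:3, 3] = rng.normal(size=3)
M = np.eye(4); M[:3, 3] = rng.normal(size=3)
P = solve_P(M, A, R_probe)
assert np.allclose(P @ R_probe @ A, M)
```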
The image registration method used in the embodiments of the present application to match the two-dimensional ultrasound image with a two-dimensional section of the preoperative three-dimensional image may include automatic registration, interactive registration, manual registration, or any combination of the three. Registration may be based on anatomical features or geometric features, pixel gray-level correlation, external positioning markers, and the like, or any other suitable registration means.
In one embodiment, registering the two-dimensional ultrasound image with the three-dimensional image specifically includes: matching the two-dimensional ultrasound image with two-dimensional sections of the preoperative three-dimensional image to obtain a matched section of the two-dimensional ultrasound image in the preoperative three-dimensional image; and obtaining a coordinate transformation matrix between the two-dimensional ultrasound image and the matched section according to the coordinates of the same feature points in the two-dimensional ultrasound image and in the matched section. The matching can be performed manually by the user, that is, a manual alignment operation by the user is received to align the two-dimensional ultrasound image with the corresponding section of the preoperative three-dimensional image, obtaining the matched section of the two-dimensional ultrasound image in the preoperative three-dimensional image.
In another embodiment, the same tissue may be identified in both the two-dimensional ultrasound image and the preoperative three-dimensional image for automatic alignment. When the target site is the liver, the identified tissue is, for example, a blood vessel or the liver capsule. After the two-dimensional ultrasound image and the preoperative three-dimensional image are aligned, the coordinate transformation relationship can be calculated from the coordinates of the coincident points.
In other embodiments, feature points in the two-dimensional ultrasound image and the preoperative three-dimensional image may first be determined; feature points generally possess properties such as translational invariance, rotational invariance, scale invariance, and insensitivity to illumination, the exact properties being determined by the feature point extraction method. Features of the feature points are then extracted, which may be generated from neighborhood gradient histograms, neighborhood autocorrelation, gray levels, and so on. The feature points of the two-dimensional ultrasound image are then matched with those of the preoperative three-dimensional image, and the coordinate transformation relationship is calculated from the matched feature points.
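As one possible realization of such feature-based matching, the sketch below uses OpenCV's ORB features to match a two-dimensional ultrasound frame against a candidate section of the preoperative volume and estimate a rigid-like 2D transform with RANSAC; the library choice and all parameters are illustrative assumptions, not the algorithm prescribed by this application.
```python
import cv2
import numpy as np

def match_section(us_img, section_img, min_matches=10):
    """Estimate a 2D transform from an ultrasound frame to a volume section."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(us_img, None)       # grayscale uint8 inputs assumed
    kp2, des2 = orb.detectAndCompute(section_img, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Partial affine (rotation + translation + uniform scale) with RANSAC outlier rejection.
    T, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return T  # 2x3 matrix mapping ultrasound pixels to section pixels
```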
In addition, the positions of external markers in the preoperative three-dimensional image can be identified, and their positions in the ultrasound image space determined based on magnetic navigation, for automatic alignment. The external markers are, for example, one or more metal markers placed on the patient's body surface, which appear as obvious bright spots in the preoperative three-dimensional image, from which the marker positions are obtained. During ultrasound scanning a positioning sensor is mounted on the ultrasound probe, through which the positions of the metal markers can be obtained; aligning the markers of the preoperative three-dimensional image with those of the two-dimensional ultrasound image then achieves registration of the two images.
When the preoperative three-dimensional image is a preoperative three-dimensional ultrasound image obtained by three-dimensional ultrasound imaging of the focus with the ultrasound probe, registering the two-dimensional ultrasound image with the preoperative three-dimensional image of the focus comprises: obtaining the transformation relationship $P$ between the ultrasound image space and the three-dimensional image space according to the positioning information acquired by the first spatial positioning device. Three-dimensional ultrasound imaging includes freehand three-dimensional ultrasound imaging, in which position information is acquired during imaging, so $P$ can be obtained automatically, increasing the speed of image registration.
In one embodiment, for ablation of soft abdominal tissues such as the liver and lung, the patient's respiratory motion shifts the positions of the soft tissue and the focus, so a respiratory correction function is introduced into the registration process. As shown in fig. 3, respiratory correction adds a time-dependent spatial mapping $T(t)$, and the spatial transformation between the two-dimensional ultrasound image and the preoperative three-dimensional image becomes:
$T_{sec} = T(t) \cdot P \cdot R_{probe} \cdot A \cdot T_{us}$ (Equation 3)
In addition, the position offset caused by respiratory motion can also be mitigated by, for example, having the patient breathe smoothly.
In actual operation, before registration the doctor first imports the preoperative three-dimensional image into the ultrasound imaging system and then scans the target tissue with the ultrasound probe. When the focus appears in the scanned image, the doctor freezes the two-dimensional ultrasound image, finds the two-dimensional section of the preoperative three-dimensional image corresponding to it, and registers the frozen two-dimensional ultrasound image with that section.
In step S220, according to the registration result, a three-dimensional model of a simulated ablation focus and a three-dimensional model of the focus are generated in the ultrasound image space in which the two-dimensional ultrasound image lies and in the three-dimensional image space in which the preoperative three-dimensional image lies, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model of the simulated ablation focus is generated according to simulated ablation parameters.
In embodiments of the present application, the step of generating the three-dimensional model of the focus may be performed before the ultrasound scan. Specifically, any suitable method may be used to segment the focus in the preoperative three-dimensional image and reconstruct the three-dimensional model of the focus from the segmentation result; segmentation methods include, but are not limited to, automatic, manual, or interactive segmentation. Illustratively, automatic segmentation may employ one or more of a random walk model, region growing, graph cut, pattern recognition, Markov random fields, adaptive thresholding, and the like. In manual segmentation the user delineates the edge of the focus on multiple two-dimensional sections of the preoperative three-dimensional image and interpolation is performed between every two layers of edges, or the user delineates the edge on every two-dimensional section, and the three-dimensional contour of the focus is generated from these two-dimensional edges. Interactive segmentation adds user interaction as algorithm input during segmentation, so that objects with high-level semantics in the image can be extracted completely. For example, the user may draw a box to select a preliminary segmentation range, within which the three-dimensional contour of the focus is then segmented automatically. As another example, the user may draw points or lines within the preliminary segmentation range; the interactive segmentation algorithm takes these as input, automatically builds a weighted graph of the similarity of each pixel to the foreground or background, and separates foreground from background by solving the minimum cut, thereby determining the three-dimensional contour of the focus.
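For illustration, here is a minimal 3D region-growing sketch in Python, one of the automatic segmentation options listed above; the seed point, intensity tolerance, and 6-connectivity are illustrative assumptions.
```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, tol=20.0):
    """Grow a binary mask from `seed` over voxels within `tol` of the seed intensity."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]  # 6-connectivity
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - seed_val) <= tol:
                    mask[n] = True
                    queue.append(n)
    return mask
```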
Then, surface reconstruction is performed on the three-dimensional contour obtained by segmentation to generate the three-dimensional model of the focus. Surface rendering reconstructs the surface of the focus structure from the three-dimensional focus data, i.e., from the segmentation result and the contour lines, and uses a suitable illumination model and texture mapping to produce a realistic three-dimensional rendering of the focus. The surface rendering algorithm may be the Marching Cubes (isosurface extraction) algorithm, which in essence treats a series of two-dimensional slices as a three-dimensional data field, extracts from it the material at a given threshold, and connects it into triangular patches with a certain topology. The basic idea of the Marching Cubes algorithm is to process each voxel of the volume data field one by one and determine, from the values at the voxel's vertices, how the isosurface intersects the interior of the voxel. In the implementation, the isosurface construction within a voxel is computed as follows: compute the triangular patches approximating the isosurface inside the voxel, and compute the normal vectors at the vertices of those patches. After a vertex value is computed, it is compared with the set threshold: a vertex whose value is below the threshold is marked as an exterior point (1), and a vertex whose value is above the threshold lies inside the surface (e.g., the ellipsoid) and is marked 0.
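A minimal sketch of this surface reconstruction step using scikit-image's Marching Cubes implementation; the synthetic ellipsoid volume and iso-level stand in for a segmented focus mask.
```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a segmented focus: an ellipsoid inside a 64^3 volume.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = ((x - 32) / 20.0) ** 2 + ((y - 32) / 14.0) ** 2 + ((z - 32) / 10.0) ** 2

# Extract the iso-surface at level 1.0 (the ellipsoid boundary) as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(volume, level=1.0)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```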
Volume rendering is a technique that produces a two-dimensional image on the screen directly from a three-dimensional data field. A digital image is described by a two-dimensional array of color and intensity elements called pixels; similarly, a three-dimensional data field can be described by a three-dimensional array of corresponding values called voxels. Like the two-dimensional raster of a digital image, the volume data field can be viewed as a three-dimensional raster. A typical three-dimensional data field is a medical-image data field: a series of medical image slices is acquired and regularized according to position and angle information into a regular data field of uniform grids in three-dimensional space, each grid node being a voxel describing attribute information such as object density. The greatest advantage of volume rendering is that the internal structure of an object can be explored and irregular objects such as muscle can be depicted, where surface rendering is weak; however, surface rendering is faster than volume rendering, so to increase imaging speed the three-dimensional model of the focus may be generated by surface rendering.
Since the three-dimensional model of the focus is reconstructed from the focus region of the preoperative three-dimensional image, its original spatial coordinate system is the three-dimensional image coordinate system of the preoperative three-dimensional image. To generate the three-dimensional model of the focus in the ultrasound image space in which the two-dimensional ultrasound image lies, the model generated in the three-dimensional image space can be mapped into the ultrasound image space according to its coordinates in the preoperative three-dimensional image and the registration result between the preoperative three-dimensional image and the two-dimensional ultrasound image (i.e., the transformation between the three-dimensional image space and the ultrasound image space). The positional relationship between the reconstructed three-dimensional model of the focus and the real-time two-dimensional ultrasound image can reflect the position, size, and geometry of the focus and its relationship with the surrounding tissue.
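A sketch of this mapping step: taking $M$ as the ultrasound-to-three-dimensional-image transform from the registration above, the vertices of the focus mesh are carried into the ultrasound image space with its inverse. The matrix value is hypothetical.
```python
import numpy as np

def map_vertices(verts, M_us_to_3d):
    """Map (N, 3) mesh vertices from 3D image space into ultrasound image space."""
    M_3d_to_us = np.linalg.inv(M_us_to_3d)                # invert the registration transform
    homog = np.hstack([verts, np.ones((len(verts), 1))])  # (N, 4) homogeneous coordinates
    return (homog @ M_3d_to_us.T)[:, :3]

# Example: carry the Marching Cubes vertices from above into ultrasound image space.
M = np.eye(4); M[:3, 3] = [5.0, -2.0, 7.5]                # hypothetical registration result
# verts_us = map_vertices(verts, M)
```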
Methods for generating the simulated ablation focus likewise include surface rendering and volume rendering. The three-dimensional model of the simulated ablation focus is generated according to the simulated ablation parameters and shows the user the ablation effect under the current simulated ablation parameters, making it easy for the user to decide whether to perform the actual ablation with those parameters. The three-dimensional model of the simulated ablation focus may be a single-needle ablation model or a multi-needle combined ablation model; for example, when the simulated ablation focus is ellipsoidal, the single-needle model is a single ellipsoid and the multi-needle combined model comprises multiple ellipsoids.
In one embodiment, generating the three-dimensional model of the simulated ablation focus from the simulated ablation parameters comprises: obtaining the position of the simulated ablation focus in the ultrasound image space according to the angle of the puncture frame mounted on the ultrasound probe and the simulated ablation depth. Further, the position of the simulated ablation focus in the three-dimensional image space can be obtained from its position in the ultrasound image space and the correspondence between the ultrasound image space and the three-dimensional image space.
Specifically, obtaining the position of the simulated ablation focus in the ultrasonic image space according to the angle of the puncture frame arranged on the ultrasonic probe and the simulated ablation depth includes the following steps: determining the position of the center point of the simulated ablation focus in the ultrasonic image space according to the angle of the puncture frame and the simulated ablation depth; and drawing a three-dimensional model of the simulated ablation focus in the ultrasonic image space according to the position of the center point of the simulated ablation focus in the ultrasonic image space and the size of the simulated ablation focus. In addition, a three-dimensional model of the simulated ablation focus can be drawn in the three-dimensional image space according to the position of the simulated ablation focus in the three-dimensional image space and the size of the simulated ablation focus. When the three-dimensional model of the simulated ablation focus is a multi-needle combined ablation model, the center coordinates of the simulated ablation focus can be set for each ablation model.
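As a sketch of the surface-drawing step under an assumed axis convention (long semi-axis along y, i.e. along depth, which is an assumption of this example), one simulated ablation focus can be sampled as a parametric ellipsoid around its center point, and a multi-needle combined model is simply one such ellipsoid per needle center:

```python
import numpy as np

def ellipsoid_surface(center, semi_long, semi_short, n=24):
    """Sample an (n, n, 3) grid of points on an ellipsoid of revolution
    representing one simulated ablation focus; the long semi-axis is
    taken along y here."""
    u, v = np.meshgrid(np.linspace(0.0, 2.0 * np.pi, n),
                       np.linspace(0.0, np.pi, n))
    x = center[0] + semi_short * np.sin(v) * np.cos(u)
    y = center[1] + semi_long * np.cos(v)
    z = center[2] + semi_short * np.sin(v) * np.sin(u)
    return np.stack([x, y, z], axis=-1)

# Multi-needle combined model: one ellipsoid per needle, each with its
# own center coordinate (hypothetical 15 mm / 10 mm semi-axes).
centers = [np.array([0.0, 40.0, 0.0]), np.array([12.0, 40.0, 0.0])]
combined_model = [ellipsoid_surface(c, 15.0, 10.0) for c in centers]
```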
In addition to the center coordinates of the simulated ablation focus, the ultrasonic imaging system also needs to acquire ablation parameters set by the operator; for example, the operator also needs to set the power and the continuous ablation time of the ablation needle, from which the size of the ablation range of the ablation needle is obtained, so as to ensure that the ablation region contains the whole three-dimensional model of the focus and its safety boundary. The safety boundary means that, during ablation, the simulated ablation focus is generally required to cover the edge of the focus expanded outward by a certain distance, so as to ensure complete ablation of the whole focus.
In this step, the operator can input a given power and ablation duration for the ablation procedure, and the extent of the ablation zone is obtained from these working parameters. Alternatively, the operator can set the range of the required ablation region, i.e. a preset ablation range, and select the corresponding given power, ablation duration and other working parameters according to the set range. Since the ablation range of an ablation needle is generally an ellipsoid, when the range of the ablation region is set in this step, only the major-axis length and the minor-axis length of the ellipsoid need be set. It should be noted that the shape of the simulated ablation focus is not limited to an ellipsoid, but may also include a sphere or a cylinder; the operator may set the shape of the simulated ablation focus according to the shape of the focus and set different parameters according to that shape.
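One way to check the safety boundary numerically, assuming the focus and the simulated ablation region have both been rasterized as binary masks on the same isotropic voxel grid (an assumption of this sketch, not a step stated above), is to expand the focus mask outward by the margin and verify that the expanded mask is fully covered:

```python
import numpy as np
from scipy import ndimage

def ball(radius_vox):
    """Spherical structuring element with the given radius in voxels."""
    r = int(np.ceil(radius_vox))
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (xx ** 2 + yy ** 2 + zz ** 2) <= radius_vox ** 2

def margin_fully_ablated(lesion_mask, ablation_mask, margin_vox):
    """True if the simulated ablation region covers the whole focus plus
    the safety boundary (the focus expanded outward by margin_vox)."""
    expanded = ndimage.binary_dilation(lesion_mask, structure=ball(margin_vox))
    return bool(np.all(ablation_mask[expanded]))
```

Dilating the focus mask with a ball of the margin radius is an exact test on the voxel grid, which avoids the geometric subtleties of shrinking an ellipsoid analytically.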
Assuming that the shape of the simulated ablation focus is an ellipsoid, the operator can set the long diameter, the short diameter, the needle-tip distance, the path depth and the like of the simulated ablation focus according to the type of ablation needle actually used, and the ultrasonic imaging system performs surface rendering of the simulated ablation focus at the coordinate origin using these parameters, wherein the needle-tip distance is the distance from the heat source of the ablation needle to the needle tip, and the center point of the simulated ablation focus lies at the penetration path depth of the ablation needle minus the needle-tip distance. If the puncture of the ablation needle is performed based on the puncture frame arranged on the ultrasonic probe, the puncture frame angle β needs to be set first, and the position T_us_ablate of the simulated ablation focus in the current ultrasonic sector, i.e. (x_us_ablate, y_us_ablate), is calculated using the set puncture frame angle β and the path depth d, wherein:

x_us_ablate = d·sin β (equation 4)

y_us_ablate = d·cos β (equation 5)
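A minimal sketch of this computation follows (Python, with assumed units of millimetres and degrees; reading the effective depth d as the path depth minus the needle-tip distance, per the description above):

```python
import numpy as np

def focus_center_in_sector(path_depth_mm, tip_distance_mm, beta_deg):
    """Center of the simulated ablation focus in the ultrasound sector.

    The heat source sits tip_distance_mm behind the needle tip, so the
    center lies at depth d = path_depth - tip_distance along the puncture
    line; equations 4 and 5 then give its sector coordinates.
    """
    d = path_depth_mm - tip_distance_mm
    beta = np.deg2rad(beta_deg)
    x_us_ablate = d * np.sin(beta)  # equation 4
    y_us_ablate = d * np.cos(beta)  # equation 5
    return np.array([x_us_ablate, y_us_ablate])

# Example: 60 mm path depth, 5 mm tip distance, 15 degree frame angle.
print(focus_center_in_sector(60.0, 5.0, 15.0))  # -> [14.235..., 53.126...]
```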
In another embodiment, the position of the simulated ablation focus may be determined according to a second spatial positioning device disposed on the ablation needle, and a three-dimensional model of the simulated ablation focus may be generated according to that position. Determining the position of the simulated ablation focus according to the second spatial positioning device disposed on the ablation needle includes: obtaining the position of the simulated ablation focus in the space of the second spatial positioning device according to the second spatial positioning device; determining the position of the simulated ablation focus in the three-dimensional image space according to the corresponding relation between the space of the second spatial positioning device and the world coordinate space and the corresponding relation between the world coordinate space and the three-dimensional image space; and determining the position of the simulated ablation focus in the ultrasonic image space according to the corresponding relation between the space of the second spatial positioning device and the world coordinate space and the corresponding relation between the world coordinate space and the ultrasonic image space.
Specifically, when the operator performs the puncture based on a second spatial positioning device (for example, Vtrax) mounted on the ablation needle, the angle and depth of the simulated ablation focus are obtained from the coordinate changes of the second spatial positioning device: the coordinates of the simulated ablation focus in the space of the second spatial positioning device are determined according to that device, converted into the ultrasonic image space through the transformation matrix from the device space to the ultrasonic image space, and finally mapped into the three-dimensional image space according to the registration matrix between the ultrasonic image space and the three-dimensional image space, thereby realizing three-dimensional visualization of the simulated ablation focus in the different image spaces. Let the coordinates of the simulated ablation focus in the ultrasonic image space be T_us_ablate; its coordinates in the three-dimensional image space are then:

T_sec_ablate = M·T_us_ablate = P·R_probe·A·T_us_ablate (equation 6)
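Reading equation 6 as a chain of 4x4 homogeneous transforms (taking A as ultrasound image space to probe-sensor space, R_probe as sensor space to world coordinates, and P as world coordinates to three-dimensional image space, which is our interpretation of the notation), the composition can be sketched as:

```python
import numpy as np

# Stand-in 4x4 homogeneous transforms; in a real system they come from
# probe calibration, the position sensor, and registration (assumptions).
A = np.eye(4)        # ultrasound image space -> probe sensor space
R_probe = np.eye(4)  # probe sensor space -> world coordinates
P = np.eye(4)        # world coordinates -> three-dimensional image space

M = P @ R_probe @ A  # equation 6: ultrasound space -> 3D image space

T_us_ablate = np.array([14.2, 53.1, 0.0, 1.0])  # focus center, homogeneous
T_sec_ablate = M @ T_us_ablate                  # center in 3D image space
```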
When the puncture frame is arranged on the ultrasonic probe and the operator moves the probe equipped with the positioning sensor, the position of the simulated ablation focus in the three-dimensional image space moves as R_probe in the mapping-relation matrix changes. As an example, if the operator is satisfied with the current position of the simulated ablation focus, the operator may click to save, i.e. designate the current position of the simulated ablation focus for ablation; upon receiving the user's confirmation instruction, the ablation needle may then be controlled to ablate the focus at the current position.
In step S240, the two-dimensional ultrasonic image, together with the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus drawn in the ultrasonic image space, is displayed in the first display window, and the display angle of the two-dimensional ultrasonic image in the first display window is fixed.
Displayed in the first display window is the image within the ultrasonic image space. In the ultrasonic image space, the coordinate origin is fixed relative to the ultrasonic sector (generally at the upper-left corner of the sector); when the ultrasonic probe moves, the position of the sector is fixed and only the content of the two-dimensional ultrasonic image changes with the movement of the probe, i.e. the rendering view angle is always perpendicular to the scanning plane of the probe. For example, when performing simulated ablation based on an ablation needle, since the position of the ablation needle is fixed relative to the ultrasonic probe, i.e. the position of the simulated ablation focus is fixed relative to the probe, the position of the simulated ablation focus also stays fixed when the probe is moved. In contrast, since the relative positional relationship between the three-dimensional model of the focus and the focus image in the two-dimensional ultrasonic image is fixed, the position or angle of the focus in the two-dimensional ultrasonic image changes with the movement of the probe, and thus the position of the three-dimensional model of the focus superimposed at the focus position also changes in real time, i.e. T_us_probe remains fixed while T_us_tumor changes in real time. In summary, in the first display window, as the ultrasonic probe moves, the display angle of the two-dimensional ultrasonic image is fixed, the image content of the two-dimensional ultrasonic image changes, and the position or angle of the three-dimensional model of the focus changes. Based on the image displayed in the first display window, the doctor can clearly observe the content of the image displayed on the current ultrasonic sector.
In step S250, the two-dimensional ultrasonic image is mapped from the ultrasonic image space to the three-dimensional image space according to the registration result, and the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus drawn in the three-dimensional image space, together with the two-dimensional ultrasonic image mapped to the three-dimensional image space, are displayed in the second display window, the position of the three-dimensional model of the focus in the second display window being fixed. The second display window and the first display window are displayed on the same display interface.
Displayed in the second display window is the image within the three-dimensional image space. In the three-dimensional image space, the coordinate origin is fixed with respect to the preoperative three-dimensional image; therefore, the position of the three-dimensional model of the focus, generated from the focus segmented out of the preoperative three-dimensional image, is fixed. When the ultrasonic probe moves or rotates, the image content of the two-dimensional ultrasonic image changes, while the relative positional relationship between the three-dimensional model of the focus and the focus image in the ultrasonic image remains fixed; consequently, the positions of the ultrasonic sector and of the three-dimensional model of the simulated ablation focus change in real time with the movement of the probe, i.e. T_us_tumor remains fixed while T_us_probe changes in real time. Based on the images displayed in the second display window, the doctor can view the three-dimensional model of the focus from different angles and observe how the three-dimensional model of the simulated ablation focus covers the three-dimensional model of the focus at those angles, so as to adjust the simulated ablation parameters accordingly.
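To make the two-window behavior concrete, the following sketch (illustrative names and stand-in data only) re-maps the focus model into the ultrasonic image space for the first window, and the sector and simulated ablation focus into the three-dimensional image space for the second window, once per frame as the probe pose changes:

```python
import numpy as np

def apply(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ T.T)[:, :3]

def render_frame(M_us_to_sec, lesion_sec, focus_us, sector_us):
    """One display update. M_us_to_sec maps ultrasound space into the
    three-dimensional image space and changes as the probe moves."""
    M_sec_to_us = np.linalg.inv(M_us_to_sec)

    # First window: camera locked to the sector. The focus model is
    # re-mapped each frame, so it moves while the sector stays put.
    window1 = {"sector": sector_us,
               "focus": focus_us,  # fixed with respect to the probe
               "lesion": apply(M_sec_to_us, lesion_sec)}

    # Second window: camera locked to the focus model. The sector and the
    # simulated ablation focus are re-mapped each frame, so they move.
    window2 = {"lesion": lesion_sec,
               "sector": apply(M_us_to_sec, sector_us),
               "focus": apply(M_us_to_sec, focus_us)}
    return window1, window2
```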
Referring to fig. 4, a display interface of one embodiment of the present application is shown. A first display window 401 and a second display window 402 are displayed on the display interface. The first display window 401 displays a two-dimensional ultrasonic image 403, the three-dimensional model 405 of the focus in the ultrasonic image space and the three-dimensional model 404 of the simulated ablation focus; the second display window 402 displays the two-dimensional ultrasonic image 403, the three-dimensional model 405 of the focus in the three-dimensional image space and the three-dimensional model 404 of the simulated ablation focus. In the first display window 401 and the second display window 402, the relative positional relationship among the ultrasonic sector of the two-dimensional ultrasonic image 403, the three-dimensional model 405 of the focus and the three-dimensional model 404 of the simulated ablation focus is identical; only the observation angle differs. The observation angle of the first display window 401 is always perpendicular to the ultrasonic sector, so that the image content of the two-dimensional ultrasonic image can always be clearly seen through the first display window 401. The observation angle of the second display window 402 always faces the three-dimensional model 405 of the focus; when the ultrasonic probe moves, the ultrasonic sector and the three-dimensional model 404 of the simulated ablation focus move along with it, so that through the second display window 402 the doctor can observe the relative positional relationship of the ultrasonic sector and the three-dimensional model 404 of the simulated ablation focus with respect to the three-dimensional model 405 of the focus, whose position remains unchanged.
Further, images of the ultrasonic probe and images of the simulated ablation needle connected to the simulated ablation focus may also be displayed in the first display window 401 and the second display window 402 so as to more intuitively display the relative positional relationship between the three-dimensional model of the simulated ablation focus and the ultrasonic probe.
In summary, the method 200 for displaying simulated ablation according to the embodiment of the present application draws the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus in the ultrasonic image space and in the three-dimensional image space respectively, so that in the three-dimensional image space, i.e. with the position of the three-dimensional model of the focus unchanged, the doctor can see the positional relationship of the ultrasonic sector and the simulated ablation focus relative to the focus; and in the ultrasonic image space, i.e. with the display angle of the two-dimensional ultrasonic image fixed, the doctor can also see the image content currently displayed on the ultrasonic sector.
Next, a display method of simulated ablation according to another embodiment of the present application will be described with reference to fig. 5. Fig. 5 is a schematic flow chart of a display method 500 of simulated ablation in an embodiment of the present application. As shown in fig. 5, a display method 500 of simulated ablation according to an embodiment of the present application includes the following steps:
in step S510, acquiring a two-dimensional ultrasonic image of a lesion in real time through an ultrasonic probe, and registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of the lesion to obtain a registration result;
in step S520, according to the registration result, generating a three-dimensional model simulating an ablation focus and a three-dimensional model of the focus in the ultrasonic image space where the two-dimensional ultrasonic image is located and in the three-dimensional image space where the preoperative three-dimensional image is located, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model simulating the ablation focus is generated according to simulated ablation parameters;
in step S530, displaying the two-dimensional ultrasound image, and the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the ultrasound image space in a first display window, wherein the display angle of the two-dimensional ultrasound image in the first display window is fixed;
in step S540, the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the three-dimensional image space are displayed in a second display window, and the positions of the three-dimensional models of the focus in the second display window are fixed.
The method 500 of displaying simulated ablation of an embodiment of the present application is substantially similar to the method 200 above. The difference is that the method 500 does not require the two-dimensional ultrasonic image to be mapped into the second display window; the second display window only needs to display the three-dimensional model of the focus and the three-dimensional model of the simulated ablation focus generated in the three-dimensional image space, where the position of the three-dimensional model of the focus is fixed and the three-dimensional model of the simulated ablation focus moves along with the ultrasonic probe. The user can check the image content of the two-dimensional ultrasonic image in the first display window and, in the second display window, check how the three-dimensional model of the simulated ablation focus covers the three-dimensional model of the focus while the probe moves. Further, an image of the ultrasonic probe and an image of the simulated ablation needle may also be displayed in the second display window so as to display more intuitively the relative positional relationship between the three-dimensional model of the simulated ablation focus and the ultrasonic probe.
Additional details of the method 500 for displaying simulated ablation may be referred to the related descriptions in the method 200 for displaying simulated ablation, and will not be described in detail herein.
Next, a display method of simulated ablation according to another embodiment of the present application will be described with reference to fig. 6. Fig. 6 is a schematic flow chart of a display method 600 of simulated ablation in an embodiment of the present application. As shown in fig. 6, a display method 600 of simulated ablation according to an embodiment of the present application includes the following steps:
in step S610, acquiring a two-dimensional ultrasonic image of a lesion in real time through an ultrasonic probe, and registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of the lesion to obtain a registration result;
in step S620, according to the registration result, generating at least a three-dimensional model simulating an ablation focus in the ultrasonic image space where the two-dimensional ultrasonic image is located and at least a three-dimensional model of the focus in the three-dimensional image space where the preoperative three-dimensional image is located, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model simulating the ablation focus is generated according to simulated ablation parameters;
in step S630, displaying the two-dimensional ultrasound image and the three-dimensional model of the simulated ablation focus generated in the ultrasound image space in a first display window, wherein the display angle of the two-dimensional ultrasound image in the first display window is fixed;
In step S640, the three-dimensional model of the lesion generated in the three-dimensional image space is displayed in a second display window, the position of the three-dimensional model of the lesion in the second display window being fixed.
Similar to the display method 200 above, the simulated ablation display method 600 of the embodiment of the present application generates a three-dimensional model of the simulated ablation focus in the ultrasonic image space and a three-dimensional model of the focus in the three-dimensional image space. The difference is that the method 600 does not require generating a three-dimensional model of the focus in the ultrasonic image space, nor a three-dimensional model of the simulated ablation focus in the three-dimensional image space. Illustratively, the two-dimensional ultrasonic image and the three-dimensional model of the simulated ablation focus may be displayed in the first display window without the three-dimensional model of the focus, and the second display window may display the three-dimensional model of the focus without the three-dimensional model of the simulated ablation focus.
In some embodiments, a two-dimensional ultrasound image may also be displayed in the second display window, and when the user moves the ultrasound probe, the display angle of the two-dimensional ultrasound image in the first display window is fixed, the image content changes, and simultaneously the position of the three-dimensional model of the focus in the second display window is fixed, and the display angle of the two-dimensional ultrasound image changes.
The method 600 for displaying simulated ablation in the embodiment of the present application generates a three-dimensional model of the simulated ablation focus in the ultrasonic image space and a three-dimensional model of the focus in the three-dimensional image space, so that the doctor can see both the three-dimensional model of the simulated ablation focus together with the image content displayed by the ultrasonic image, and the three-dimensional model of the focus. Additional details of the method 600 may be found in the related descriptions of the method 200 and are not repeated here.
Next, a display method of simulated ablation according to another embodiment of the present application will be described with reference to fig. 7. Fig. 7 is a schematic flow chart of a display method 700 of simulated ablation in an embodiment of the present application. As shown in fig. 7, a display method 700 of simulated ablation according to an embodiment of the present application includes the following steps:
in step S710, acquiring a two-dimensional ultrasound image of a lesion in real time by an ultrasound probe, and registering the two-dimensional ultrasound image with a preoperative three-dimensional image of the lesion to obtain a registration result;
in step S720, according to the registration result, generating a three-dimensional model simulating an ablation focus and a three-dimensional model of the focus in the ultrasonic image space where the two-dimensional ultrasonic image is located and in the three-dimensional image space where the preoperative three-dimensional image is located, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model simulating the ablation focus is generated according to simulated ablation parameters;
In step S730, the two-dimensional ultrasound image, the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the ultrasound image space, and the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the three-dimensional image space are displayed in a first display window, and the display angle of the two-dimensional ultrasound image in the first display window is fixed.
Similar to the display method 200 above, the display method 700 of simulated ablation according to the embodiment of the present application generates a three-dimensional model simulating the ablation focus and a three-dimensional model of the focus in both the ultrasonic image space in which the two-dimensional ultrasonic image is located and the three-dimensional image space in which the preoperative three-dimensional image is located. The difference is that in the method 700, the three-dimensional models of the simulated ablation focus and of the focus generated in the ultrasonic image space, and those generated in the three-dimensional image space, are all displayed in the first display window, so the user can view in one window the models drawn in the different image spaces. Additional details of the method 700 may be found in the related description of the method 200 and are not repeated here.
Another aspect of the embodiments of the present application provides an ultrasound imaging system for implementing the display methods of simulated ablation described above, such as the method 200, 500, 600 or 700. The ultrasound imaging system includes an ultrasound probe, a transmitting circuit, a receiving circuit, a processor and a display. Referring back to fig. 1, the ultrasound imaging system may be implemented as the ultrasound imaging system 100 shown in fig. 1. The ultrasound imaging system 100 may include an ultrasound probe 110, a transmitting circuit 112, a receiving circuit 114, a processor 116 and a display 118; optionally, the ultrasound imaging system 100 may further include a transmit/receive selection switch 120 and a beam-forming module 122, where the transmitting circuit 112 and the receiving circuit 114 may be connected to the ultrasound probe 110 through the transmit/receive selection switch 120. The related descriptions of the respective components may be found above and are not repeated here.
The ultrasonic imaging system of the embodiment of the present application draws the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus in the ultrasonic image space and in the three-dimensional image space respectively, so that in the three-dimensional image space, i.e. with the position of the three-dimensional model of the focus unchanged, the doctor can see the positional relationship of the ultrasonic sector and the simulated ablation focus relative to the focus; and in the ultrasonic image space, i.e. with the position of the ultrasonic sector fixed, the doctor can also see the image content currently displayed on the ultrasonic sector.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the application and aid in understanding one or more of the various inventive aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. However, the method of this application should not be construed to reflect the following intent: i.e., the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
The foregoing is merely illustrative of specific embodiments of the present application and the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of displaying simulated ablation, the method comprising:
Acquiring a two-dimensional ultrasonic image of a focus in real time through an ultrasonic probe, and registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of the focus to obtain a registration result;
according to the registration result, simultaneously generating a three-dimensional model simulating an ablation focus and a three-dimensional model of a focus in an ultrasonic image space where the two-dimensional ultrasonic image is located and a three-dimensional image space where the preoperative three-dimensional image is located, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model simulating the ablation focus is generated according to simulated ablation parameters;
displaying the two-dimensional ultrasonic image, the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the ultrasonic image space in a first display window, wherein the display angle of the two-dimensional ultrasonic image in the first display window is fixed;
mapping the two-dimensional ultrasonic image from the ultrasonic image space to the three-dimensional image space according to the registration result, and displaying the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the three-dimensional image space and the two-dimensional ultrasonic image mapped to the three-dimensional image space in a second display window, wherein the positions of the three-dimensional model of the focus in the second display window are fixed.
2. The method of claim 1, wherein the registration result includes a transformation relationship between the ultrasound image space and the three-dimensional image space, the ultrasound probe having a first spatial localization device, the registering the two-dimensional ultrasound image with a preoperative three-dimensional image of a lesion to obtain a registration result, comprising:
registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of a focus according to the space positioning information obtained by the first space positioning device so as to obtain a transformation relationship between the ultrasonic image space and the three-dimensional image space.
3. The method of claim 2, wherein the registering the two-dimensional ultrasound image with the preoperative three-dimensional image of the lesion based on the spatial localization information obtained by the first spatial localization device comprises:
matching the two-dimensional ultrasonic image with a two-dimensional section in the preoperative three-dimensional image to obtain a matched section of the two-dimensional ultrasonic image in the preoperative three-dimensional image;
obtaining a coordinate transformation matrix of the two-dimensional ultrasonic image and the matched section according to the coordinates of the same characteristic points in the two-dimensional ultrasonic image and in the matched section;
Obtaining a transformation relation between the three-dimensional image space and the world coordinate space according to the coordinate transformation matrix and the space positioning information;
and obtaining the transformation relation between the ultrasonic image space and the three-dimensional image space according to the transformation relation between the three-dimensional image space and the world coordinate space.
4. A method according to claim 3, wherein said deriving a transformation relationship between the three-dimensional image space and world coordinate space from the coordinate transformation matrix and the spatial positioning information comprises:
acquiring a transformation relation between the ultrasonic image space and the space of a first space positioning device and a transformation relation between the space of the first space positioning device and the world coordinate space;
obtaining a transformation relationship between the ultrasonic image space and the world coordinate space according to the transformation relationship between the ultrasonic image space and the space of the first space positioning device and the transformation relationship between the space of the first space positioning device and the world coordinate space;
and obtaining the transformation relation between the three-dimensional image space and the world coordinate space according to the coordinate transformation matrix and the transformation relation between the ultrasonic image space and the world coordinate space.
5. The method of claim 2, wherein the pre-operative three-dimensional image comprises a pre-operative three-dimensional ultrasound image of the lesion imaged three-dimensionally with the ultrasound probe, the registering the two-dimensional ultrasound image with the pre-operative three-dimensional image of the lesion comprising:
and obtaining the transformation relation between the ultrasonic image space and the three-dimensional image space according to the positioning information acquired by the first space positioning device.
6. The method of claim 1, wherein generating a three-dimensional model of a simulated ablation focus from the simulated ablation parameters comprises:
obtaining the position of the simulated ablation focus in the ultrasonic image space according to the angle of a puncture frame arranged on the ultrasonic probe and the simulated ablation depth;
and obtaining the position of the simulated ablation focus in the three-dimensional image space according to the position of the simulated ablation focus in the ultrasonic image space and the corresponding relation between the ultrasonic image space and the three-dimensional image space.
7. The method of claim 6, wherein the obtaining the location of the simulated ablation focus in the ultrasound image space from the angle of a penetration frame disposed on the ultrasound probe and the simulated ablation depth comprises:
Determining the position of the center point of the simulated ablation range in the ultrasonic image space according to the angle of the puncture frame and the simulated ablation depth;
and generating a three-dimensional model of the simulated ablation focus in the ultrasonic image space according to the position of the central point of the simulated ablation focus in the ultrasonic image space and the size of the simulated ablation focus.
8. The method of claim 1, wherein generating a three-dimensional model of a simulated ablation focus from the simulated ablation parameters comprises:
and determining the position of the simulated ablation stove according to a second space positioning device arranged on the ablation needle, and generating a three-dimensional model of the simulated ablation stove according to the position of the simulated ablation stove.
9. The method of claim 8, wherein said determining the location of the simulated ablation focus from a second spatial positioning device disposed on the ablation needle comprises:
obtaining the position of the simulated ablation focus in the space of the second spatial positioning device according to the second spatial positioning device;

determining the position of the simulated ablation focus in the three-dimensional image space according to the corresponding relation between the space of the second spatial positioning device and the world coordinate space and the corresponding relation between the world coordinate space and the three-dimensional image space;

And determining the position of the simulated ablation focus in the ultrasonic image space according to the corresponding relation between the space of the second spatial positioning device and the world coordinate space and the corresponding relation between the world coordinate space and the ultrasonic image space.
10. The method according to claim 8 or 9, further comprising:
and when a confirmation instruction of a user is received, controlling the ablation needle to ablate the focus at the current position of the ablation needle.
11. The method as recited in claim 1, further comprising:
displaying images of the ultrasonic probe and images of the simulated ablation needle connected with the simulated ablation focus in the first display window and the second display window.
12. A method of displaying simulated ablation, the method comprising:
acquiring a two-dimensional ultrasonic image of a focus in real time through an ultrasonic probe, and registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of the focus to obtain a registration result;
according to the registration result, simultaneously generating a three-dimensional model simulating an ablation focus and a three-dimensional model of a focus in an ultrasonic image space where the two-dimensional ultrasonic image is located and a three-dimensional image space where the preoperative three-dimensional image is located, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model simulating the ablation focus is generated according to simulated ablation parameters;
Displaying the two-dimensional ultrasonic image, the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the ultrasonic image space in a first display window, wherein the display angle of the two-dimensional ultrasonic image in the first display window is fixed;
and displaying the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the three-dimensional image space in a second display window, wherein the three-dimensional model of the focus in the second display window is fixed in position.
13. A method of displaying simulated ablation, the method comprising:
acquiring a two-dimensional ultrasonic image of a focus in real time through an ultrasonic probe, and registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of the focus to obtain a registration result;
according to the registration result, generating at least a three-dimensional model simulating an ablation focus in an ultrasonic image space where the two-dimensional ultrasonic image is located and generating at least a three-dimensional model of the focus in a three-dimensional image space where the preoperative three-dimensional image is located, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model simulating the ablation focus is generated according to simulated ablation parameters;
Displaying the two-dimensional ultrasonic image and the three-dimensional model of the simulated ablation focus generated in the ultrasonic image space in a first display window, wherein the display angle of the two-dimensional ultrasonic image in the first display window is fixed;
and displaying the three-dimensional model of the focus generated in the three-dimensional image space in a second display window, wherein the position of the three-dimensional model of the focus in the second display window is fixed.
14. A method of displaying simulated ablation, the method comprising:
acquiring a two-dimensional ultrasonic image of a focus in real time through an ultrasonic probe, and registering the two-dimensional ultrasonic image with a preoperative three-dimensional image of the focus to obtain a registration result;
according to the registration result, simultaneously generating a three-dimensional model simulating an ablation focus and a three-dimensional model of a focus in an ultrasonic image space where the two-dimensional ultrasonic image is located and a three-dimensional image space where the preoperative three-dimensional image is located, wherein the three-dimensional model of the focus is generated according to the preoperative three-dimensional image, and the three-dimensional model simulating the ablation focus is generated according to simulated ablation parameters;
displaying the two-dimensional ultrasonic image, the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the ultrasonic image space, and the three-dimensional model of the simulated ablation focus and the three-dimensional model of the focus generated in the three-dimensional image space in a first display window, wherein the display angle of the two-dimensional ultrasonic image in the first display window is fixed.
15. An ultrasound imaging system, comprising:
an ultrasonic probe;
a transmitting circuit for exciting the ultrasonic probe to transmit ultrasonic waves to a target tissue;
a receiving circuit for controlling the ultrasonic probe to receive the echo of the ultrasonic wave so as to obtain an echo signal of the ultrasonic wave;
a processor for performing the steps of the method of displaying simulated ablation of any of claims 1-14.

Publications (1)

Publication Number Publication Date
CN115998423A 2023-04-25

Family

ID=86025408



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination