CN117315133A - Method and system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images - Google Patents

Method and system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images

Info

Publication number: CN117315133A
Application number: CN202311084439.2A
Authority: CN (China)
Prior art keywords: dimensional, ultrasonic image, heart, image, segmentation
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 周欣欢 (Zhou Xinhuan), 曹琴 (Cao Qin)
Current assignee: Shenzhen Xinhuan Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shenzhen Xinhuan Technology Co., Ltd.
Application filed by Shenzhen Xinhuan Technology Co., Ltd.
Priority to: CN202311084439.2A

Classification

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a method and a system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images. The method comprises the following steps: acquiring original cardiac ultrasound images before the operation and preprocessing them to generate an ultrasound image sample set; constructing and training a convolutional neural network on the sample set to obtain segmentation models for the different tissues of a cardiac image; acquiring intracardiac ultrasound images in real time during the operation and inputting them into the trained network to obtain segmentation results; and performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the segmented images, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the spatial position of each image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model. The invention can accurately segment, label and three-dimensionally model each structure in intracardiac images at different moments in real time; the segmentation is more accurate, intraoperative time and manpower are markedly reduced, and the safety of the operation is improved.

Description

Method and system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images.
Background
Heart disease is a major health problem: roughly 40% of deaths worldwide each year are attributed to cardiovascular disease, and most cardiovascular disease is closely related to abnormal functioning of the heart. Cardiac intervention is the current first-line clinical treatment for arrhythmia; a catheter is introduced into the heart through a blood vessel to carry out various operations, achieving minimally invasive treatment. Many cardiac interventional procedures require three-dimensional modeling of the heart and identification of the different cardiac tissues. In arrhythmia surgery, for example, intraoperative ultrasound is used to acquire cardiac images, rapidly segment and identify the different tissues in each image, and build a three-dimensional model from the spatial positions of those tissues (such as the left atrial appendage, left atrium and right atrium) to guide complicated therapeutic manipulations.
Clinical cardiac ultrasound image recognition and segmentation is currently performed manually: a doctor often has to segment hundreds of images by hand before the three-dimensional structure of the heart can be reconstructed. This consumes substantial clinical resources, the result is subject to deviation caused by inter-operator differences, and the longer operating time raises the risk of complications.
In intracardiac ultrasound image segmentation and reconstruction, apart from manual segmentation by a specialist, existing computer-aided segmentation is mostly based on traditional image processing techniques. Because the information in an ultrasound image is complex, the gray-level distribution is uneven, the noise is heavy and organ tissues deform easily, feature selection and image segmentation are difficult, so the segmentation process is time-consuming and of low accuracy. The other main methods are applied to images acquired by transthoracic (TTE) or transesophageal (TEE) echocardiography and segment a single anatomical structure such as the left ventricle; since only blood tissue and the endocardium need to be distinguished, the end result is merely a delineation of the endocardial contour.
In short, the segmentation process of existing computer-aided methods for intracardiac ultrasound images remains time-consuming and of low accuracy, and the prior art is therefore still in need of further development.
Disclosure of Invention
In view of these technical problems, embodiments of the invention provide a method and a system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images, which address the long processing time and low accuracy of traditional computer-aided segmentation in intracardiac ultrasound image segmentation and reconstruction.
A first aspect of an embodiment of the present invention provides a method for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images, the method comprising:
acquiring original cardiac ultrasound images before the operation, preprocessing them, and generating an ultrasound image sample set;
constructing a deep learning convolutional neural network and training it on a computing architecture based on the ultrasound image sample set, to obtain segmentation models for the different tissues of a cardiac image;
acquiring cardiac ultrasound images in real time during the operation using intracardiac ultrasound, inputting the intracardiac images into the segmentation model, and obtaining segmentation results for the different tissues in real time;
and performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position of each image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model.
Optionally, before the three-dimensional coordinate transformation, three-dimensional modeling and time synchronization are performed on the cardiac ultrasound image segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position of each image recorded by the three-dimensional positioning sensor, to obtain the four-dimensional heart model, the method comprises:
acquiring the ECG and respiratory electrical signals recorded by the gating system for bioelectrical analysis, and obtaining the acquisition time of each ultrasound image under respiratory and ECG gating;
recording the three-dimensional spatial position at which each ultrasound image is acquired, based on a three-dimensional positioning sensor integrated on the intracardiac ultrasound probe and an extracorporeal three-dimensional positioning system.
Optionally, performing the three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the cardiac ultrasound image segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position of each image recorded by a three-dimensional positioning sensor, to obtain the four-dimensional heart model, comprises:
performing three-dimensional spatial coordinate transformation on the image grid points using a coordinate transformation matrix, according to the three-dimensional position of the probe when each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
and performing three-dimensional modeling, based on a Poisson reconstruction algorithm, on the segmented images gated to the same ECG and respiratory moment but acquired at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart model at different moments.
Optionally, performing the three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the cardiac ultrasound image segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position of each image recorded by a three-dimensional positioning sensor, to obtain the four-dimensional heart model, includes:
performing coordinate transformation on the image grid points according to the three-dimensional position of the probe when each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
and performing three-dimensional modeling on the segmented images gated to the same ECG and respiratory moment but acquired at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart model at different moments.
Optionally, acquiring the ECG and respiratory electrical signals recorded by ECG gating includes:
acquiring body-surface electrical signals during the operation via body-surface electrodes;
performing a Fourier transform on the body-surface signals to obtain periodic high-frequency and low-frequency components, where the high-frequency component is taken as the ECG signal and the low-frequency component as the respiratory signal.
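The Fourier-based separation of the body-surface signal described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the 0.5 Hz cutoff, sampling rate and test frequencies are assumptions chosen so that respiration (roughly 0.1 to 0.4 Hz) and heartbeat (roughly 1 to 2 Hz) fall on opposite sides of the boundary.

```python
import numpy as np

def separate_cardiac_respiratory(signal, fs, cutoff_hz=0.5):
    """Split a body-surface signal into a low-frequency (respiratory) and
    a high-frequency (cardiac) component by masking the FFT spectrum.

    cutoff_hz is an assumed boundary between the two frequency bands.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    low = spectrum.copy()
    low[freqs > cutoff_hz] = 0           # keep the respiratory band
    high = spectrum.copy()
    high[freqs <= cutoff_hz] = 0         # keep the cardiac band
    return (np.fft.irfft(low, n=len(signal)),
            np.fft.irfft(high, n=len(signal)))

# Synthetic example: 0.25 Hz respiration plus a 1.2 Hz cardiac component
fs = 100.0
t = np.arange(0, 20, 1.0 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)
ecg = 0.5 * np.sin(2 * np.pi * 1.2 * t)
resp_est, ecg_est = separate_cardiac_respiratory(resp + ecg, fs)
```

Because both test frequencies fall exactly on FFT bins over the 20 s window, the two components are recovered almost exactly; with real, noisy signals a smooth band-pass filter would be preferable to a hard spectral mask.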
Optionally, obtaining the acquisition time of each ultrasound image under respiratory and ECG gating includes:
recording the moment of each ultrasound image within each heartbeat signal and each respiration signal.
A second aspect of an embodiment of the present invention provides a system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images, the system comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of:
acquiring original cardiac ultrasound images before the operation, preprocessing them, and generating an ultrasound image sample set;
constructing a deep learning convolutional neural network and training it on a computing architecture based on the ultrasound image sample set, to obtain segmentation models for the different tissues of a cardiac image;
acquiring cardiac ultrasound images in real time during the operation using intracardiac ultrasound, inputting the intracardiac images into the segmentation model, and obtaining segmentation results for the different tissues in real time;
and performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position of each image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model.
Optionally, the computer program when executed by the processor implements the steps of:
acquiring the ECG and respiratory electrical signals recorded by the gating system for bioelectrical analysis, and obtaining the acquisition time of each ultrasound image under respiratory and ECG gating;
recording the three-dimensional spatial position at which each ultrasound image is acquired, based on a three-dimensional positioning sensor integrated on the intracardiac ultrasound probe and an extracorporeal three-dimensional positioning system.
Optionally, the computer program when executed by the processor further implements the steps of:
performing coordinate transformation on the image grid points according to the three-dimensional position of the probe when each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
and performing three-dimensional modeling on the segmented images gated to the same ECG and respiratory moment but acquired at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart model at different moments.
A third aspect of the embodiments of the present invention provides a non-volatile computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the method for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images described above.
In the technical scheme provided by the embodiments of the invention, original cardiac ultrasound images are collected before the operation and preprocessed to generate an ultrasound image sample set; a deep learning convolutional neural network is constructed and trained on a computing architecture based on the sample set, to obtain segmentation models for the different tissues of a cardiac image; cardiac ultrasound images are acquired in real time during the operation with intracardiac ultrasound and input into the segmentation model, yielding segmentation results for the different tissues in real time; and three-dimensional coordinate transformation, three-dimensional modeling and time synchronization are performed on the segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model. With this artificial-intelligence-based intracardiac ultrasound segmentation method, each cardiac structure in the ultrasound image can be accurately segmented, labeled and three-dimensionally modeled at different moments, rather than merely separating blood tissue from endocardial structures; the segmentation is more accurate, intraoperative time and manpower are markedly reduced, and the safety of the operation is improved.
Drawings
FIG. 1 is a flowchart of a preferred embodiment of the method for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the convolutional neural network model in a preferred embodiment of the method for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images according to an embodiment of the present invention;
FIG. 3 is a schematic hardware structure diagram of another embodiment of the system for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Intracardiac echocardiography (ICE) uses an ultrasound transducer mounted at the distal end of an interventional catheter that is delivered to the heart through a peripheral vein for imaging. During interventional procedures, ICE provides the operator with a real-time assessment of cardiac anatomy and guides the physician's manipulation at different anatomical levels. ICE has been used to assist atrial septal puncture, left atrial appendage occlusion, mitral valvuloplasty, radiofrequency ablation of atrial fibrillation, treatment of ventricular arrhythmias, cardiac biopsies, and the like.
Accurate extraction of cardiac structural information from the ultrasound image is the key to how well ICE can assist a procedure. At present, during ICE-guided diagnosis a specialist must manually delineate the cardiac anatomy in the ultrasound images in real time, and after each structure is segmented, the three-dimensional heart is reconstructed by combining the two-dimensional image information with the position information given by the positioning sensor in the catheter. The whole process is time-consuming and labor-intensive and suffers from subjective misjudgment. Traditional ultrasound segmentation methods, such as region growing, threshold segmentation and wavelet transforms, have low accuracy and poor robustness and cannot meet clinical requirements. Moreover, these methods can only identify and segment individual atrial or ventricular structures: they aim to distinguish blood tissue from endocardial structures, which is relatively easy because the two differ markedly in appearance across the image, but this degree of segmentation has limited clinical significance.
In intracardiac ultrasound image segmentation and reconstruction, apart from manual segmentation by a specialist, existing computer-aided segmentation is mostly based on traditional image processing techniques. Because the information in an ultrasound image is complex, the gray-level distribution is uneven, the noise is heavy and organ tissues deform easily, feature selection and image segmentation are difficult, so the segmentation process is time-consuming and of low accuracy. The other main methods are applied to images acquired by TTE or TEE and segment a single anatomical structure such as the left ventricle; since only blood tissue and the endocardium need to be distinguished, the end result is merely a delineation of the endocardial contour. There is as yet no mature artificial-intelligence-based segmentation method applied to ICE images.
To address these shortcomings, the invention provides a method for semantic segmentation of intracardiac ultrasound images based on a deep learning convolutional neural network, which can automatically, rapidly and accurately segment the different anatomical structures of the heart in an ultrasound image and assist doctors in planning the operation. Three-dimensional modeling is then performed based on the segmentation results.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of an embodiment of a method for automatic segmentation and four-dimensional modeling of intraoperative intracardiac ultrasound images according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
step S100, acquiring an original heart ultrasonic image before operation, and preprocessing the original heart ultrasonic image to generate an ultrasonic image sample set;
step 200, constructing a deep learning convolutional neural network, and training the convolutional neural network on the computing architecture based on the ultrasonic image sample set to obtain segmentation models of different tissues of a heart image;
step S300, intra-cardiac-cavity ultrasound is used for acquiring cardiac ultrasound images in real time in an operation, the intra-cardiac ultrasound images are input into the image segmentation model, and segmentation results of different tissues of the cardiac ultrasound images are obtained in real time;
and step 400, carrying out three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasonic image segmentation result according to the time of acquisition of each ultrasonic image corresponding to respiration and electrocardiograph gating and the three-dimensional space position of each ultrasonic image acquired by a three-dimensional positioning sensor, so as to obtain a four-dimensional heart model.
In a specific implementation, original intracardiac images from a number of different patients are acquired with ICE equipment and preprocessed with operations such as manual segmentation and labeling, enhancement, rotation, translation, scaling and mirroring, to generate preprocessed raw data. For example, the preprocessed data are manually segmented, different tissues such as blood, the left atrial appendage and the right atrial appendage are labeled by hand, the doctor's regions of interest are delineated, and the ultrasound image sample set is generated. A convolutional network model is then constructed and trained to obtain the segmentation model, and the two-dimensional image to be processed is input into the model to obtain the segmentation result.
For example, a deep learning convolutional neural network is constructed; a number of original intracardiac ultrasound images are acquired with ICE equipment and preprocessed; the preprocessed two-dimensional ultrasound images are labeled into different structural categories, and the total sample is divided into training samples and test samples. The convolutional neural network is trained on a computing architecture (such as a CPU, GPU or TPU) based on the ultrasound image sample set, taking the image of each group of training samples as input and the class of each pixel as the label, to obtain a semantic segmentation model. The ICE images are collected from clinical surgery, and the different tissues, such as the left atrial appendage, blood, esophagus, left atrium and pulmonary veins, are labeled by a professional clinician. The model shown in FIG. 2 is then trained by optimizing the coefficients of the down-convolution and up-convolution layers so as to minimize the deviation between the network output and the manual labels: the smaller the deviation, the higher the accuracy of the network model.
the obtained model is used for testing the accuracy of the sample, if the performance of the model can not meet the requirement, the performance is tested after the parameters of the neural network are repeatedly adjusted for many times until the accuracy of the model in the test sample is qualified;
The finally trained model is applied to the two-dimensional ICE images to be processed, assigning the corresponding structure label to each pixel in every image. At the same time, as each two-dimensional ICE image is acquired, the position of its imaging plane is recorded by a magnetic positioning sensor.
From the position of the image acquisition plane and the relative position of the tissue in each label image, the three-dimensional spatial position of every pixel is obtained. Three-dimensional coordinate transformation, three-dimensional modeling (for example, using a Poisson reconstruction algorithm) and time synchronization are then performed on the segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional position recorded by the three-dimensional positioning sensor, to obtain the four-dimensional heart model. Four dimensions here means time plus the three-dimensional images.
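The training objective used in this kind of pipeline, minimizing the deviation between the network output and the manual labels, is commonly realized as a per-pixel cross-entropy loss. A minimal numpy sketch with hypothetical class indices (the patent does not fix a particular loss function):

```python
import numpy as np

def pixelwise_cross_entropy(logits, labels):
    """Mean cross-entropy over all pixels.

    logits: (H, W, C) raw network outputs; labels: (H, W) integer class ids
    (e.g. 0=background, 1=blood, 2=left atrial appendage -- illustrative).
    """
    # Softmax over the class axis, stabilized by subtracting the max logit
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w = labels.shape
    # Probability the network assigns to the correct class of each pixel
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.log(picked + 1e-12).mean()

# A confident, correct prediction yields a lower loss than a uniform one
labels = np.array([[0, 1], [2, 1]])
confident = np.full((2, 2, 3), -10.0)
for i in range(2):
    for j in range(2):
        confident[i, j, labels[i, j]] = 10.0
uniform = np.zeros((2, 2, 3))
loss_good = pixelwise_cross_entropy(confident, labels)
loss_flat = pixelwise_cross_entropy(uniform, labels)
```

During training, the convolution coefficients are adjusted by gradient descent to drive this loss down, which is exactly the "smaller deviation, higher accuracy" criterion described above.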
Further, collecting original intracardiac ultrasound images with the ICE equipment and preprocessing them to generate the ultrasound image sample set includes:
acquiring original intracardiac ultrasound images with the ICE equipment;
augmenting the original images to obtain an enlarged set of original images;
and segmenting the original images according to the regions of interest, obtaining the ultrasound image sample set from the segmented images.
In a specific implementation, a number of original intracardiac ultrasound images from different patients are acquired with the ICE equipment. Because the gray-level distribution is uneven and the noise is heavy, the boundaries between structures are unclear, so the data must be preprocessed. To better optimize the model and avoid problems such as overfitting, the data are augmented during preprocessing, including but not limited to geometric augmentation (flipping, rotation, cropping, deformation, scaling, etc.) and color augmentation (noise, blurring, color transformation, erasure, filling, etc.).
The preprocessed raw data are manually segmented to define the doctor's regions of interest (Region of Interest, ROI). Typically, the ROIs in ICE images are the Left Atrium (LA), Left Atrial Appendage (LAA), Left Coronary Cusp (LCC), Left Ventricle (LV), Mitral Valve (MV), Non-Coronary Cusp (NCC), Pulmonary Artery (PA), Right Atrium (RA), Right Atrial Appendage (RAA), Right Coronary Cusp (RCC), Right Ventricle (RV), Right Ventricular Outflow Tract (RVOT), Superior Vena Cava (SVC), Tricuspid Valve (TV), and the like. A single ICE image contains only some of these structures, since as the ICE catheter is moved and rotated, different images capture structural information at different locations in the heart. The segmented sample set is divided into training samples and test samples for training and testing the model.
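The augmentation operations listed earlier can be sketched as follows; the specific transforms, noise level and random seed are illustrative assumptions. Note that geometric transforms must be applied identically to the image and its label mask so the annotations stay aligned, while intensity noise touches only the image:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mask):
    """Apply one random geometric plus intensity augmentation.

    Geometric transforms are applied to both image and segmentation mask;
    speckle-like Gaussian noise is applied to the image only.
    """
    if rng.random() < 0.5:                        # random horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))                   # rotate by 0/90/180/270 deg
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    image = image + rng.normal(0.0, 0.02, image.shape)  # assumed noise level
    return np.clip(image, 0.0, 1.0), mask

image = rng.random((64, 64))                      # stand-in ultrasound frame
mask = (image > 0.5).astype(np.int64)             # stand-in binary label map
aug_img, aug_mask = augment(image, mask)
```

In practice such a function is applied on the fly during training so that each epoch sees a differently transformed copy of the sample set.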
Further, constructing the deep learning convolutional neural network and training it on the computing architecture based on the ultrasound image sample set to obtain the image segmentation model includes:
constructing a two-dimensional convolutional neural network;
and training the two-dimensional convolutional neural network on the ultrasound image sample set to obtain a two-dimensional image segmentation model.
In a specific implementation, the two-dimensional convolutional neural network constructed in the invention comprises an input layer, several convolutional and pooling layers, and a final output layer. A three-dimensional convolutional network is the 3D extension of the conventional two-dimensional network: it can extract additional hidden information shared across similar images and is applied to three-dimensional image segmentation. Its structure is similar to the two-dimensional case except that convolution is performed with 3D convolution kernels.
After the model is built, the prepared test samples are used to evaluate its performance.
When optimizing the parameters, the adjustable network parameters include the number of samples, the number of channels and window sizes of the convolutional layers, the window sizes of the pooling layers, and so on. In addition, the overall performance of the model can be changed by adjusting the learning rate, choosing different optimizers and loss functions, and using different batch sizes and numbers of epochs. The parameters are tuned until the model with the best performance is obtained, which becomes the final model.
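Parameter tuning of the kind described above is often organized as a simple search over candidate configurations, keeping the model that scores best on the test samples. A schematic sketch, where `validate` is a stand-in for a full train-and-evaluate run and its scoring is purely illustrative:

```python
import itertools

# Hypothetical search space; real values depend on the network and the data
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [8, 16],
}

def validate(config):
    """Stand-in for training the network with `config` and measuring its
    accuracy on the test samples. This toy score simply ranks the
    1e-3 / batch-16 combination highest, purely for illustration."""
    score = 1.0 - abs(config["learning_rate"] - 1e-3) * 100
    score += 0.01 if config["batch_size"] == 16 else 0.0
    return score

# Exhaustive grid search: evaluate every combination, keep the best
best = max(
    (dict(zip(search_space, values))
     for values in itertools.product(*search_space.values())),
    key=validate,
)
```

In practice the grid would be replaced or supplemented by random search or early-stopping heuristics, since each `validate` call is an expensive full training run.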
Further, before the three-dimensional coordinate transformation, three-dimensional modeling and time synchronization are performed on the cardiac ultrasound image segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position recorded by the three-dimensional positioning sensor, to obtain the four-dimensional heart model, the method comprises:
acquiring the ECG and respiratory electrical signals recorded by the gating system for bioelectrical analysis, and obtaining the acquisition time of each ultrasound image under respiratory and ECG gating;
recording the spatial position of each ultrasound image acquisition, based on the three-dimensional positioning sensor integrated on the intracardiac ultrasound probe and the extracorporeal three-dimensional positioning system.
In a specific implementation, the heartbeat and respiration are assumed to be periodic: the gating system records the ECG and respiratory electrical signals for bioelectrical analysis and identifies the acquisition moment of each ultrasound image within one respiratory cycle and one heartbeat cycle. At the same time, the three-dimensional positioning sensor integrated on the intracardiac ultrasound probe and the extracorporeal three-dimensional positioning system record the spatial position of each ultrasound image acquisition.
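Once the gating signals are recorded, assigning each ultrasound image a position within its cardiac or respiratory cycle reduces to a phase computation. A minimal sketch under the periodicity assumption stated above; the R-peak times and heart rate are hypothetical values:

```python
import bisect

def cycle_phase(timestamp, peak_times):
    """Phase in [0, 1) of `timestamp` within its cycle, where peak_times
    are the detected cycle boundaries (e.g. ECG R-peaks, or the analogous
    landmarks of the respiratory signal)."""
    i = bisect.bisect_right(peak_times, timestamp) - 1
    if i < 0 or i + 1 >= len(peak_times):
        raise ValueError("timestamp outside the recorded cycles")
    start, end = peak_times[i], peak_times[i + 1]
    return (timestamp - start) / (end - start)

# Hypothetical R-peaks at a steady 0.8 s interval (75 bpm)
r_peaks = [0.0, 0.8, 1.6, 2.4]
phase = cycle_phase(1.0, r_peaks)   # image acquired 0.2 s into a cycle
```

Images whose cardiac and respiratory phases match can then be grouped for reconstruction, which is what "gated to the same moment" means in the modeling step.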
Further, performing the three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the cardiac ultrasound image segmentation results, according to the acquisition time of each ultrasound image under respiratory and ECG gating and the three-dimensional spatial position recorded by the three-dimensional positioning sensor, to obtain the four-dimensional heart model, includes:
performing coordinate transformation on the image grid points according to the three-dimensional position of the probe when each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
and performing three-dimensional modeling on the segmented images gated to the same ECG and respiratory moment but acquired at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart model at different moments.
Further, performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasound image segmentation results according to the acquisition time of each ultrasound image relative to the respiratory and ECG gating and the three-dimensional spatial position of each ultrasound image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model, includes:
performing three-dimensional spatial coordinate transformation on the image grid points with coordinate transformation matrices according to the three-dimensional position of the probe at the time each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
performing three-dimensional modeling, based on a Poisson reconstruction algorithm, on the segmented images acquired under ECG and respiratory gating at the same instant but at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart models at different instants.
In a specific implementation, according to the three-dimensional position of the probe at the time each ultrasound image was acquired, the image grid points undergo coordinate transformations such as translation and rotation, and the two-dimensional image coordinates are mapped into three-dimensional space. The three-dimensional spatial coordinate transformation uses coordinate transformation matrices (such as translation, rotation and mirror matrices), and the heart structure is then modeled in three dimensions with algorithms such as Poisson reconstruction.
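As a minimal sketch of the grid-point mapping step, assuming the probe pose is available as a 4x4 homogeneous matrix (the function name, the pixel-spacing convention and the assumption that the image plane sits at z = 0 in probe coordinates are illustrative choices, not the patent's disclosure):

```python
import numpy as np

def pixels_to_world(rows, cols, spacing, pose):
    """Map 2-D image pixel coordinates into 3-D space.

    rows, cols : pixel indices of segmented points (1-D sequences)
    spacing    : (row_mm, col_mm) physical pixel spacing
    pose       : 4x4 homogeneous matrix giving the probe's position and
                 orientation when the frame was acquired
    """
    n = len(rows)
    # Points in the image plane (z = 0), in homogeneous coordinates
    pts = np.stack([np.asarray(cols) * spacing[1],
                    np.asarray(rows) * spacing[0],
                    np.zeros(n),
                    np.ones(n)])          # shape (4, N)
    return (pose @ pts)[:3]               # world x, y, z, shape (3, N)
```

With an identity orientation and a pure translation pose, each pixel is simply scaled by the spacing and shifted, which makes the mapping easy to sanity-check.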
The three-dimensional modeling algorithm is specifically as follows:
by combining the three-dimensional space position and imaging angle of the ICE catheter, all points on each two-dimensional image can be subjected to space rigid transformation to obtain a point cloud obtained by segmentation. Because the ICE probe translates and rotates in the heart cavity at any time in the image acquisition plane, the following two linear space transformation relations can be established
P′ = R_x · R_y · R_z · T · P

where P ∈ R^(4×N) is the augmented matrix of the coordinates of all label points before the spatial coordinate transformation, and P′ ∈ R^(4×N) is the augmented matrix of the coordinates of all label points after it. R_x ∈ R^(4×4), R_y ∈ R^(4×4) and R_z ∈ R^(4×4) are the rotation matrices about the x-, y- and z-axes, T ∈ R^(4×4) is the translation matrix, and N is the number of label points undergoing the spatial transformation.

In these matrices, t_x, t_y and t_z are the translations of the probe along the x, y and z directions, and R_x, R_y and R_z describe the rotations of the probe about the x, y and z axes, respectively. The translation distances and rotation angles can be measured by a magnetic field sensor integrated in the front end of the probe.
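Using the same notation, a minimal numerical sketch of P′ = R_x · R_y · R_z · T · P follows. The function name is illustrative and the rotation angles are assumed to be in radians; the patent itself does not specify an implementation.

```python
import numpy as np

def rigid_transform(P, tx, ty, tz, ax, ay, az):
    """Apply P' = Rx @ Ry @ Rz @ T @ P to a 4xN matrix of homogeneous
    label-point coordinates.

    tx, ty, tz : probe translation along x, y, z (from the magnetic sensor)
    ax, ay, az : probe rotation angles about x, y, z (radians)
    """
    T = np.eye(4); T[:3, 3] = [tx, ty, tz]          # translation matrix
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0, 0], [0, cx, -sx, 0], [0, sx, cx, 0], [0, 0, 0, 1]])
    Ry = np.array([[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]])
    Rz = np.array([[cz, -sz, 0, 0], [sz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    return Rx @ Ry @ Rz @ T @ P
```

For instance, with zero rotation the origin maps to (t_x, t_y, t_z), and a 90-degree rotation about z maps the point (1, 0, 0) to (0, 1, 0).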
A three-dimensional heart model can then be obtained with a surface reconstruction algorithm such as Crust-based reconstruction or Poisson reconstruction.
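As a highly simplified stand-in for this reconstruction step, the transformed point cloud can be rasterized into an occupancy grid; a production system would instead run Poisson (or Crust) reconstruction on the same cloud, for example with a dedicated geometry library such as Open3D, to obtain a watertight mesh. The function below is an assumption-laden sketch, not the patent's algorithm.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Rasterize a segmented 3-D point cloud into a boolean occupancy grid.

    points : (3, N) array of world coordinates
    Returns the grid and the integer voxel index of its origin.
    """
    idx = np.floor(points / voxel_size).astype(int)   # voxel index per point
    origin = idx.min(axis=1)
    idx -= origin[:, None]                            # shift so indices start at 0
    grid = np.zeros(idx.max(axis=1) + 1, dtype=bool)
    grid[idx[0], idx[1], idx[2]] = True
    return grid, origin
```

The occupancy grid is enough to visualize the chamber volume per phase; a mesh-based reconstruction would replace this step without changing the surrounding pipeline.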
Further, acquiring the electrocardiographic and respiratory electrical signals recorded by the ECG gating includes:
acquiring body-surface electrical signals during the operation with body-surface electrodes;
applying a Fourier transform to the body-surface electrical signals to obtain periodic high-frequency and low-frequency components, where the high-frequency component is taken as the electrocardiographic signal and the low-frequency component as the respiratory signal.
In a specific implementation, body-surface electrodes collect the body-surface electrical signals during the operation, and a Fourier transform decomposes the collected signals into a high-frequency part and a low-frequency part, where the high-frequency part is the electrocardiographic signal and the low-frequency part is the respiratory signal.
Peaks of the respiratory signal are detected automatically, and all peaks of the electrocardiographic signal are searched to obtain the ECG R waves.
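A sketch of this Fourier-based separation and peak search follows. The 0.5 Hz cutoff, the function name and the use of `scipy.signal.find_peaks` with a 0.4 s refractory distance are illustrative assumptions; the patent does not fix these parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def split_gating_signal(sig, fs, cutoff_hz=0.5):
    """Split a body-surface signal into respiratory (low-frequency) and
    cardiac (high-frequency) components with an FFT mask, then pick peaks.
    """
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    low = spec.copy(); low[freqs > cutoff_hz] = 0          # respiration
    high = spec.copy(); high[freqs <= cutoff_hz] = 0       # cardiac component
    resp = np.fft.irfft(low, n=len(sig))
    ecg = np.fft.irfft(high, n=len(sig))
    # R waves: prominent peaks of the high-frequency component,
    # at most one per 0.4 s
    r_idx, _ = find_peaks(ecg, distance=int(0.4 * fs))
    breath_idx, _ = find_peaks(resp)
    return ecg, resp, r_idx, breath_idx
```

On a synthetic mixture of a 0.25 Hz respiratory sine and a 1.25 Hz cardiac sine, the mask recovers each component and the peak search returns one index per breath and per heartbeat.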
Further, obtaining the acquisition time of each ultrasound image relative to the respiratory and ECG gating includes:
recording the corresponding instant of each ultrasound image within each heartbeat signal and each respiratory signal.
In practice, respiration and heartbeat are assumed to vary periodically. The corresponding instant of each ultrasound image within each heartbeat signal and each respiratory signal is recorded, and all ultrasound images are synchronized into one respiratory cycle and one cardiac cycle, thereby yielding the four-dimensional heart model.
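The synchronization step above amounts to grouping frames by phase, which can be sketched as follows; the bin count and function name are illustrative assumptions rather than the patent's specification.

```python
import numpy as np

def build_4d_frames(frames, cardiac_phase, n_bins=10):
    """Group segmented frames into cardiac-phase bins so that each bin holds
    all frames (from different probe positions) belonging to the same instant
    of one canonical heartbeat; each bin is then modeled in 3-D.
    """
    bins = [[] for _ in range(n_bins)]
    # Clamp phase 1.0 into the last bin
    idx = np.minimum((np.asarray(cardiac_phase) * n_bins).astype(int),
                     n_bins - 1)
    for frame, b in zip(frames, idx):
        bins[b].append(frame)
    return bins
```

Running the 3-D modeling step once per bin then yields the sequence of three-dimensional models that constitutes the four-dimensional heart model.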
In some other embodiments, the algorithm is also used for image segmentation and three-dimensional modeling of other tissues and organs, or artificial intelligence is used to find abnormal lesion targets.
The embodiment of the invention provides a method for automatic segmentation and four-dimensional modeling of intracardiac ultrasound images, which uses an artificial-intelligence-based intracardiac ultrasound image segmentation method to accurately segment and label every cardiac structure in the ultrasound image, rather than only the blood tissue and endocardial structures.
Two-dimensional and three-dimensional neural network structures segment the two-dimensional images obtained by the ICE equipment and the reconstructed three-dimensional images automatically, accurately and in real time, which saves time and labor and assists the doctor in planning the surgical scheme.
The segmented images produced by the artificial intelligence then undergo spatial coordinate transformation and three-dimensional modeling through affine transformations, combined with the probe position and spatial angle acquired by the magnetic field sensor.
It should be noted that the steps need not follow a fixed order; those skilled in the art will understand that in different embodiments the steps may be performed in different orders, in parallel, interchangeably, and so on.
The method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images in the embodiment of the present invention is described above; the corresponding system is described below. Referring to fig. 3, fig. 3 is a schematic hardware structure diagram of another embodiment of a system for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images. As shown in fig. 3, the system includes: an ultrasonic probe, a transmit/receive switch, a pulse-transmitter DAC, a transmit-beam-control FPGA/DSP, an analog front end with multichannel analog circuits, a receive-beamforming FPGA, a three-dimensional coordinate transformation CPU, a gating system, a GPU for neural network training and processing and for three-dimensional reconstruction and rendering, and a display. The system further includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the following steps:
acquiring an original heart ultrasound image before the operation and preprocessing the original heart ultrasound image to generate an ultrasound image sample set;
constructing a deep learning convolutional neural network and training the convolutional neural network based on the ultrasound image sample set to obtain segmentation models for different tissues in the heart image;
acquiring heart ultrasound images in real time during the operation using intracardiac ultrasound, inputting the intracardiac ultrasound images into the image segmentation model, and obtaining segmentation results for different tissues of the heart ultrasound image in real time;
performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasound image segmentation results according to the acquisition time of each ultrasound image relative to the respiratory and ECG gating and the three-dimensional spatial position of each ultrasound image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model.
Specific implementation steps are the same as those of the method embodiment, and are not repeated here.
Optionally, the computer program when executed by the processor 101 also implements the steps of:
acquiring the electrocardiographic and respiratory electrical signals recorded by the gating system for bioelectrical analysis, and obtaining the acquisition time of each ultrasound image relative to the respiratory and ECG gating;
recording the spatial position of each ultrasound image acquisition with a three-dimensional positioning sensor integrated on the intracardiac ultrasound probe and an external three-dimensional positioning system.
Specific implementation steps are the same as those of the method embodiment, and are not repeated here.
Optionally, the computer program when executed by the processor 101 also implements the steps of:
transforming the coordinates of the image grid points according to the three-dimensional position of the probe at the time each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
performing three-dimensional modeling on the segmented images acquired under ECG and respiratory gating at the same instant but at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart models at different instants.
Specific implementation steps are the same as those of the method embodiment, and are not repeated here.
Optionally, the computer program when executed by the processor 101 also implements the steps of:
acquiring body-surface electrical signals during the operation with body-surface electrodes;
applying a Fourier transform to the body-surface electrical signals to obtain periodic high-frequency and low-frequency components, where the high-frequency component is taken as the electrocardiographic signal and the low-frequency component as the respiratory signal.
Specific implementation steps are the same as those of the method embodiment, and are not repeated here.
Optionally, the computer program when executed by the processor 101 also implements the steps of:
recording the corresponding instant of each ultrasound image within each heartbeat signal and each respiratory signal.
Specific implementation steps are the same as those of the method embodiment, and are not repeated here.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions for execution by one or more processors, e.g., to perform the method steps S100-S400 of fig. 1 described above.
By way of example, nonvolatile storage media can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory components or memories of the operating environment described in embodiments of the present invention are intended to comprise one or more of these and/or any other suitable types of memory.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images, the method comprising:
acquiring an original heart ultrasound image before the operation and preprocessing the original heart ultrasound image to generate an ultrasound image sample set;
constructing a deep learning convolutional neural network and training the convolutional neural network based on the ultrasound image sample set to obtain segmentation models for different tissues in the heart image;
acquiring heart ultrasound images in real time during the operation using intracardiac ultrasound, inputting the intracardiac ultrasound images into the image segmentation model, and obtaining segmentation results for different tissues of the heart ultrasound image in real time;
performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasound image segmentation results according to the acquisition time of each ultrasound image relative to the respiratory and ECG gating and the three-dimensional spatial position of each ultrasound image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model.
2. The method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to claim 1, wherein before performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasound image segmentation results according to the acquisition time of each ultrasound image relative to the respiratory and ECG gating and the three-dimensional spatial position of each ultrasound image recorded by a three-dimensional positioning sensor, so as to obtain a four-dimensional heart model, the method comprises:
acquiring the electrocardiographic and respiratory electrical signals recorded by the gating system for bioelectrical analysis, and obtaining the acquisition time of each ultrasound image relative to the respiratory and ECG gating;
recording the spatial position of each ultrasound image acquisition with a three-dimensional positioning sensor integrated on the intracardiac ultrasound probe and an external three-dimensional positioning system.
3. The method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to claim 2, wherein performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasound image segmentation results according to the acquisition time of each ultrasound image relative to the respiratory and ECG gating and the three-dimensional spatial position of each ultrasound image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model, comprises:
transforming the coordinates of the image grid points according to the three-dimensional position of the probe at the time each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
performing three-dimensional modeling on the segmented images acquired under ECG and respiratory gating at the same instant but at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart models at different instants.
4. The method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to claim 2, wherein performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasound image segmentation results according to the acquisition time of each ultrasound image relative to the respiratory and ECG gating and the three-dimensional spatial position of each ultrasound image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model, comprises:
performing three-dimensional spatial coordinate transformation on the image grid points with coordinate transformation matrices according to the three-dimensional position of the probe at the time each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
performing three-dimensional modeling, based on a Poisson reconstruction algorithm, on the segmented images acquired under ECG and respiratory gating at the same instant but at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart models at different instants.
5. The method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to claim 3 or 4, wherein acquiring the electrocardiographic and respiratory electrical signals recorded by the ECG gating comprises:
acquiring body-surface electrical signals during the operation with body-surface electrodes;
applying a Fourier transform to the body-surface electrical signals to obtain periodic high-frequency and low-frequency components, where the high-frequency component is taken as the electrocardiographic signal and the low-frequency component as the respiratory signal.
6. The method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to claim 5, wherein obtaining the acquisition time of each ultrasound image relative to the respiratory and ECG gating comprises:
recording the corresponding instant of each ultrasound image within each heartbeat signal and each respiratory signal.
7. A system for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images, the system comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of:
acquiring an original heart ultrasound image before the operation and preprocessing the original heart ultrasound image to generate an ultrasound image sample set;
constructing a deep learning convolutional neural network and training the convolutional neural network based on the ultrasound image sample set to obtain segmentation models for different tissues in the heart image;
acquiring heart ultrasound images in real time during the operation using intracardiac ultrasound, inputting the intracardiac ultrasound images into the image segmentation model, and obtaining segmentation results for different tissues of the heart ultrasound image in real time;
performing three-dimensional coordinate transformation, three-dimensional modeling and time synchronization on the heart ultrasound image segmentation results according to the acquisition time of each ultrasound image relative to the respiratory and ECG gating and the three-dimensional spatial position of each ultrasound image recorded by a three-dimensional positioning sensor, to obtain a four-dimensional heart model.
8. The system for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to claim 7, wherein the computer program, when executed by the processor, further implements the steps of:
acquiring the electrocardiographic and respiratory electrical signals recorded by the gating system for bioelectrical analysis, and obtaining the acquisition time of each ultrasound image relative to the respiratory and ECG gating;
recording the spatial position of each ultrasound image acquisition with a three-dimensional positioning sensor integrated on the intracardiac ultrasound probe and an external three-dimensional positioning system.
9. The system for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to claim 8, wherein the computer program, when executed by the processor, further implements the steps of:
transforming the coordinates of the image grid points according to the three-dimensional position of the probe at the time each ultrasound image was acquired, and mapping the two-dimensional image coordinates into three-dimensional space;
performing three-dimensional modeling on the segmented images acquired under ECG and respiratory gating at the same instant but at different positions, to obtain a four-dimensional heart model, where the four-dimensional heart model refers to the three-dimensional heart models at different instants.
10. A non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the method for automatic segmentation and four-dimensional modeling of intra-operative intracardiac ultrasound images according to any one of claims 1-6.
CN202311084439.2A 2023-08-28 2023-08-28 Method and system for automatic segmentation and four-dimensional modeling of intra-operative-cavity ultrasonic image Pending CN117315133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311084439.2A CN117315133A (en) 2023-08-28 2023-08-28 Method and system for automatic segmentation and four-dimensional modeling of intra-operative-cavity ultrasonic image


Publications (1)

Publication Number Publication Date
CN117315133A true CN117315133A (en) 2023-12-29

Family

ID=89245298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311084439.2A Pending CN117315133A (en) 2023-08-28 2023-08-28 Method and system for automatic segmentation and four-dimensional modeling of intra-operative-cavity ultrasonic image

Country Status (1)

Country Link
CN (1) CN117315133A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118397024A (en) * 2024-06-28 2024-07-26 西南医科大学附属医院 Automatic segmentation method and system for intracardiac department ultrasonic cardiogram
CN118397024B (en) * 2024-06-28 2024-08-16 西南医科大学附属医院 Automatic segmentation method and system for intracardiac department ultrasonic cardiogram


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination