CN116363030A - Medical image processing method, medical image processing device, electronic equipment and storage medium - Google Patents


Publication number
CN116363030A
CN116363030A (application CN202310144297.8A)
Authority
CN
China
Prior art keywords
image
dimensional
target object
medical image
medical
Prior art date
Legal status
Pending
Application number
CN202310144297.8A
Other languages
Chinese (zh)
Inventor
周小虎
黄德兴
谢晓亮
刘市祺
奉振球
侯增广
桂美将
李�浩
项天宇
于喆
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202310144297.8A
Publication of CN116363030A
Legal status: Pending

Classifications

    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T7/10: Segmentation; Edge detection
    • G06T7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T2207/10072: Tomographic images
    • G06T2207/10116: X-ray image
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20221: Image fusion; Image merging
    • G06T2207/30101: Blood vessel; Artery; Vein; Vascular
    • G06V2201/03: Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a medical image processing method, a medical image processing device, electronic equipment and a storage medium, and relates to the technical field of image processing. The method comprises the following steps: acquiring a two-dimensional medical image of a measured object, and performing target object segmentation on the two-dimensional medical image to obtain a first image; determining three-dimensional image transformation parameters based on the first image; acquiring a second image, and projecting the second image from three-dimensional space to two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image, the second image being an image of the target object segmented from a three-dimensional image of the measured object; and performing centroid alignment on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fusion medical image. The technical scheme provided by the invention realizes real-time, accurate registration of the two-dimensional medical image and the three-dimensional medical image.

Description

Medical image processing method, medical image processing device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a medical image processing method, apparatus, electronic device, and storage medium.
Background
With the continuous development of medical technology, medical devices, particularly medical imaging devices, play an increasingly important role in medical procedures. For example, in interventional procedures, tracking of the progress of the procedure may be guided by medical images acquired by a medical imaging device.
Taking vascular interventional surgery as an example, traditional imaging examinations cannot effectively evaluate the occluded segment of a blood vessel. During an interventional procedure, the doctor can only deliver the interventional instrument according to the force feedback of the guide wire and personal experience; the guide wire can easily take a wrong direction, forcing the operation to stop and even causing serious complications. With the development of magnetic resonance imaging techniques, the boundary between the vessel wall and surrounding tissue can be distinguished using high resolution vessel wall imaging (High Resolution Vessel Wall Imaging, HR-VWI), which has been used for preoperative assessment of vessel occlusion segments. However, HR-VWI is a three-dimensional image, while intraoperative digital subtraction angiography (Digital Subtraction Angiography, DSA) produces a two-dimensional image; this difference in spatial dimension means that information about the vessel occlusion segment cannot be intuitively obtained from HR-VWI during interventional procedures. Therefore, how to accurately register three-dimensional and two-dimensional medical images in real time and fuse images of different modalities, so as to provide intuitive information to assist doctors in interventional procedures, is of great significance for interventional surgery and has become an urgent problem to be solved.
Disclosure of Invention
The invention provides a medical image processing method, a medical image processing device, electronic equipment and a storage medium, to solve the prior-art problem of accurately registering two-dimensional and three-dimensional medical images in real time.
The invention provides a medical image processing method, which comprises the following steps:
acquiring a two-dimensional medical image of a measured object, and performing target object segmentation on the two-dimensional medical image to obtain a first image;
determining three-dimensional image transformation parameters based on the first image;
acquiring a second image, and projecting the second image from three-dimensional space to two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image; the second image is an image of the target object segmented from a three-dimensional image of the measured object;
and aligning the centroids of the target object in the first image and in the two-dimensional projection image to obtain a two-dimensional fusion medical image.
According to the medical image processing method provided by the invention, aligning the centroids of the target object in the first image and in the two-dimensional projection image to obtain a two-dimensional fusion medical image comprises the following steps:
respectively determining a first centroid coordinate of the target object in the first image and a second centroid coordinate of the target object in the two-dimensional projection image;
Determining a coordinate difference of the first centroid coordinate and the second centroid coordinate;
and aligning the centroids of the target objects in the first image and the two-dimensional projection image through translation transformation based on the coordinate difference value to obtain the two-dimensional fusion medical image.
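The centroid-alignment steps above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function names, the use of binary masks, and the rounding to integer pixel shifts are assumptions for illustration.

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid (row, col) of the nonzero pixels of a binary target-object mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def align_by_centroid(first: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Translate the projection mask so that its centroid coincides with the
    centroid of the target object in the first image (translation transform
    based on the coordinate difference of the two centroids)."""
    dy, dx = np.round(centroid(first) - centroid(proj)).astype(int)
    return np.roll(np.roll(proj, dy, axis=0), dx, axis=1)
```

In practice the aligned projection mask would then be overlaid on the first image to form the two-dimensional fusion medical image.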
According to the medical image processing method provided by the invention, the method for determining the three-dimensional image transformation parameters based on the first image comprises the following steps:
inputting the first image into a regressor model to obtain the three-dimensional image transformation parameters output by the regressor model;
the regressor model is obtained by training based on a sample two-dimensional medical image of the target object and three-dimensional image transformation parameter tag data corresponding to the sample two-dimensional medical image; the sample two-dimensional medical image is generated based on two-dimensional spatial projection of a first sample three-dimensional image of the target object.
According to the medical image processing method provided by the invention, the regressor model is trained based on the following steps:
acquiring the first sample three-dimensional image;
sampling the transformation parameters in a preset range based on a preset distribution mode to obtain sampling transformation parameters;
Performing two-dimensional space projection on the first sample three-dimensional image based on the sampling transformation parameters to obtain the sample two-dimensional medical image;
and training an initial regressor model based on the sample two-dimensional medical image and the three-dimensional image transformation parameter label data by taking the sampling transformation parameter as the three-dimensional image transformation parameter label data to obtain the regressor model.
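The sampling step of the training procedure above can be sketched as follows. The parameter bounds, the uniform distribution, and the function names are all assumptions for illustration; the patent only specifies a "preset distribution mode" over a "preset range", and the projection function is left as a caller-supplied stub.

```python
import numpy as np

# Hypothetical bounds for theta = (rx, ry, rz, tx, ty, tz); in practice the
# preset range would come from the imaging geometry of the DSA device.
PARAM_LO = np.array([-10.0, -10.0, -10.0, -20.0, -20.0, -20.0])
PARAM_HI = np.array([10.0, 10.0, 10.0, 20.0, 20.0, 20.0])

def sample_transform_params(n: int, seed: int = 0) -> np.ndarray:
    """Sample n transformation-parameter vectors in the preset range."""
    rng = np.random.default_rng(seed)
    return rng.uniform(PARAM_LO, PARAM_HI, size=(n, 6))

def build_training_set(volume, project, n: int):
    """Project the first sample 3D image under each sampled theta; the sampled
    theta serves as the transformation-parameter label for that projection."""
    thetas = sample_transform_params(n)
    images = [project(volume, theta) for theta in thetas]
    return images, thetas
```

The resulting (image, theta) pairs would then be used to train the initial regressor model.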
The medical image processing method provided by the invention further comprises the following steps:
acquiring a three-dimensional image of the measured object, and extracting a cross-sectional image of the three-dimensional image;
inputting the cross-sectional images into a second image semantic segmentation model to obtain segmented images of the target object corresponding to each cross-sectional image output by the second image semantic segmentation model; the second image semantic segmentation model is used for segmenting the target object in the cross-sectional image;
and splicing the segmented images corresponding to the cross-sectional images to obtain the second image.
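The slice-and-splice procedure above can be sketched as follows, with the 2D segmentation model abstracted as a per-slice function (the function name and the slicing axis are assumptions for illustration).

```python
import numpy as np

def segment_volume(volume: np.ndarray, segment_slice) -> np.ndarray:
    """Apply a 2D semantic-segmentation function to every cross-sectional
    slice of the 3D image, then splice (stack) the per-slice masks back
    into a 3D segmented image of the target object (the 'second image')."""
    masks = [segment_slice(volume[k]) for k in range(volume.shape[0])]
    return np.stack(masks, axis=0)
```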
According to the medical image processing method provided by the invention, the second image semantic segmentation model is obtained based on training of the following steps:
Acquiring a sample cross-sectional image of the second sample three-dimensional image;
labeling the target object in the sample cross-sectional image to obtain label data corresponding to the sample cross-sectional image;
training a second initial image semantic segmentation model based on the sample cross-sectional image and the label data to obtain the second image semantic segmentation model.
According to the medical image processing method provided by the invention, performing target object segmentation on the two-dimensional medical image to obtain a first image comprises the following steps:
inputting the two-dimensional medical image into a first image semantic segmentation model to obtain the first image output by the first image semantic segmentation model; the first image semantic segmentation model is used for segmenting the target object in the two-dimensional medical image.
The present invention also provides a medical image processing apparatus comprising:
the acquisition module is used for acquiring a two-dimensional medical image of the tested object;
the segmentation module is used for carrying out target object segmentation on the two-dimensional medical image to obtain a first image;
a determining module for determining three-dimensional image transformation parameters based on the first image;
The conversion module is used for acquiring a second image, and projecting the second image from three-dimensional space to two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image; the second image is an image of the target object segmented from a three-dimensional image of the measured object;
and the alignment module is used for aligning the centroids of the target object in the first image and in the two-dimensional projection image to obtain a two-dimensional fusion medical image.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing any of the medical image processing methods described above when executing the computer program.
The invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a medical image processing method as described in any of the above.
According to the medical image processing method, the medical image processing device, the electronic equipment and the storage medium, a two-dimensional medical image of a measured object is first acquired, target object segmentation is performed on the two-dimensional medical image to obtain a first image, and three-dimensional image transformation parameters are determined based on the first image; then a second image, i.e. an image of the target object segmented from a three-dimensional image of the measured object, is acquired, and the second image is projected from three-dimensional space to two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image; finally, centroid alignment is performed on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fusion medical image. In this way, the three-dimensional image transformation parameters can be determined from the two-dimensional medical image, and the second image can be projected from three-dimensional space to two-dimensional space based on those parameters; since the registration process only needs to be performed once, the real-time performance of image registration and fusion is ensured. The subsequent centroid alignment of the target object in the first image and the two-dimensional projection image further reduces the error of the image transformation parameters and improves the registration accuracy of the target object across medical images of different spatial dimensions, thereby realizing real-time, accurate registration of the two-dimensional medical image and the three-dimensional medical image.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a medical image processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for performing centroid alignment on the target object in a first image and a two-dimensional projection image according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method of training a regressor model in an embodiment of the present invention;
FIG. 4 is a flowchart of a method for processing a three-dimensional image of a measured object to obtain a second image according to an embodiment of the present invention;
FIG. 5 is a flowchart of a training method of a second image semantic segmentation model according to an embodiment of the present invention;
fig. 6 is a schematic structural view of a medical image processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, in the present invention, the numbers of the described objects, such as "first", "second", etc., are only used to distinguish the described objects, and do not have any sequence or technical meaning.
In interventional procedures, the progress of the procedure may be tracked and guided through medical images acquired by a medical imaging device. For example, taking a chronic carotid artery occlusion (Chronic Carotid Artery Occlusion, CCAO) interventional procedure as an example, high resolution vessel wall imaging (HR-VWI) can be used to distinguish the boundary between the vessel wall and surrounding tissue for preoperative assessment of the vessel occlusion segment. However, HR-VWI is a three-dimensional image, while the image acquired by the digital subtraction angiography (DSA) device used during surgery is two-dimensional; this difference in spatial dimension means that information about the vessel occlusion segment cannot be intuitively obtained from HR-VWI during an interventional procedure.
The medical imaging device may include a two-dimensional imaging device through which a two-dimensional (2D) medical image may be acquired and a three-dimensional imaging device through which a three-dimensional (3D) medical image may be acquired.
In the related art, 2D-3D image registration can be used to fuse a preoperative 3D medical image with an intraoperative two-dimensional medical image, providing intuitive information to assist the doctor during surgery. For example, an optimization-based image registration method may be adopted: the 3D image is converted into a two-dimensional reconstructed image using initialized registration parameters, the similarity between the two-dimensional reconstructed image and the intraoperative two-dimensional medical image is calculated, the registration parameters are adjusted according to the similarity, and registration is performed again with the adjusted parameters; the optimization is iterated until a set of optimal parameters is obtained as the final registration result. However, this method requires repeated parameter optimization and image conversion, which is time-consuming; its computation speed cannot meet the real-time requirement of clinical surgery, and it is sensitive to the initial parameter values, so its robustness is poor. As another example, registration may be performed with a convolutional-neural-network-based regression method; although such a method achieves real-time inference speed, the space of transformation parameters is so large that the parameters cannot be estimated accurately, which may cause large errors.
Based on the above, the embodiment of the invention provides a medical image processing method which can acquire a two-dimensional medical image of a measured object and perform target object segmentation on the two-dimensional medical image to obtain a first image; determine three-dimensional image transformation parameters based on the first image; acquire a second image, i.e. an image of the target object segmented from a three-dimensional image of the measured object, and project the second image from three-dimensional space to two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image; and perform centroid alignment on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fusion medical image. By aligning the centroids of the target object in the first image and the two-dimensional projection image, the error of the image transformation parameters can be effectively reduced and the registration accuracy of the target object across medical images of different spatial dimensions improved, realizing real-time, accurate registration of images of different modalities.
In vascular interventional procedures, a physician may determine the position of the instrument in the vessel based on the DSA image and deliver the instrument accordingly. With the medical image processing method provided by the embodiment of the invention, during a vascular interventional procedure the two-dimensional image acquired in real time by the DSA device can be registered and fused with a three-dimensional medical image captured preoperatively by imaging equipment such as HR-VWI, magnetic resonance imaging (Magnetic Resonance Imaging, MRI) or computed tomography (Computed Tomography, CT); the resulting two-dimensional fused medical image can be displayed and used to provide visual navigation for the interventional procedure, assisting the doctor in the operation.
The medical image processing method of the present invention is described below with reference to fig. 1 to 5.
Fig. 1 schematically illustrates a flowchart of a medical image processing method according to an embodiment of the present invention, and referring to fig. 1, the medical image processing method may include the following steps 110 to 130.
Step 110: and acquiring a two-dimensional medical image of the measured object, and carrying out target object segmentation on the two-dimensional medical image to obtain a first image.
The measured object is the object photographed or detected by the medical imaging device; it may be a detection site of a patient photographed by the medical imaging device, for example a cervical blood vessel or a cardiac blood vessel of the patient. The medical imaging device here captures images in two-dimensional space.
The two-dimensional medical image is an image of a two-dimensional space obtained after the detection part of the patient is photographed by the medical imaging device. For example, after the arm of the human body is irradiated by using an X-ray machine, an X-ray image of the arm of the human body can be obtained, and the X-ray image is a medical image in a two-dimensional plane. For another example, after the neck of the human body is photographed by using a digital subtraction angiography X-ray machine, an angiography image of the neck of the human body, which is a medical image in a two-dimensional plane, can be obtained.
In interventional operation, the DSA device may acquire a two-dimensional medical image of a measured object in real time, and after acquiring the real-time two-dimensional medical image, may perform target object segmentation on the two-dimensional medical image to obtain a first image. The target object is an object of interest in interventional procedures, for example, in performing interventional procedures for carotid artery occlusion, a cervical artery vessel or an occlusion segment of a cervical artery vessel may be taken as the target object in a two-dimensional medical image of the cervical artery vessel.
For example, when the two-dimensional medical image is subjected to target object segmentation, the two-dimensional medical image acquired by the medical imaging device can be processed according to frames, and the target object in each frame of the two-dimensional medical image is segmented by using an image semantic segmentation algorithm, so that non-target objects are removed, and a first image is obtained. Alternatively, a target detection algorithm may be used to extract the target object from the two-dimensional medical image, resulting in the first image. For example, the DSA device may take an X-ray image of the neck of a patient with a carotid artery occlusion, obtain a two-dimensional medical image of the patient's neck blood vessel, then perform semantic segmentation on the two-dimensional medical image using a semantic segmentation algorithm, segment the carotid artery blood vessel or an occlusion segment of the carotid artery blood vessel as a target object, and remove other images except the target object from the image, so as to obtain a first image containing only the target object.
Step 120: three-dimensional image transformation parameters are determined based on the first image.
After the first image is obtained, three-dimensional image transformation parameters may be determined based on imaging information of the first image. The first image is a two-dimensional image obtained after two-dimensional space imaging is carried out on an object in a three-dimensional space, the imaging process comprises a conversion relation between the two-dimensional space and the three-dimensional space, and corresponding three-dimensional image conversion parameters can be determined by utilizing the conversion relation. For example, the three-dimensional image transformation parameters may be determined based on imaging information such as imaging angles of the first image, imaging device parameters, and the like. The three-dimensional image transformation parameters are used for representing the conversion relation between the three-dimensional space image and the two-dimensional space image.
For example, the process of solving the three-dimensional image transformation parameters can be modeled as a regression problem, the two-dimensional projection image of the three-dimensional image corresponding to the initialized transformation parameters and the initialized transformation parameters are used as known parameters, and regression estimation is performed on the three-dimensional image transformation parameters corresponding to the first image by using a regression algorithm.
Step 130: acquiring a second image, and projecting the second image from a three-dimensional space to a two-dimensional space based on three-dimensional image transformation parameters to obtain a two-dimensional projection image; the second image is an image of the target object segmented from the three-dimensional image of the object under test.
A three-dimensional image of the measured object is obtained by imaging the measured object in three-dimensional space. Target object segmentation is then performed on the obtained three-dimensional image to obtain the second image, which represents the imaging of the target object in three-dimensional space. For example, a three-dimensional image of the patient's neck can be obtained by magnetic resonance imaging (MRI), computed tomography (CT), high-resolution vessel wall imaging (HR-VWI) or the like; taking the occlusion-segment vessel as the target object and segmenting it from the three-dimensional image yields a three-dimensional image of the occlusion-segment vessel.
By projecting the second image from three-dimensional space to two-dimensional space using the three-dimensional image transformation parameters, a two-dimensional projection image of the second image at the two-dimensional viewing angle can be obtained. For example, two-dimensionally projecting a second image containing only the occlusion-segment vessel yields a two-dimensional projection image of that vessel at the two-dimensional viewing angle.
By way of example, with the three-dimensional image transformation parameters, the three-dimensional image can be projected by an image projection algorithm to obtain a two-dimensional projection image. The image projection algorithm may be a ray casting algorithm, a maximum intensity projection algorithm, a splatting ("snowball throwing") algorithm, or the like; the embodiment of the invention is not particularly limited in this respect.
For example, a three-dimensional image may be projected by a ray casting algorithm using the three-dimensional image transformation parameters to obtain a two-dimensional projection image. Projecting the second image from three-dimensional space to two-dimensional space can be divided into two parts: a pixel value transformation and a coordinate transformation.
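Of the projection algorithms mentioned, maximum intensity projection is the simplest to sketch. The axis-aligned version below is an illustration only: each output pixel keeps the brightest voxel along the viewing direction, whereas a full implementation would first resample the volume according to the three-dimensional image transformation parameters before projecting.

```python
import numpy as np

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project a 3D volume to a 2D image by taking the maximum voxel value
    along the chosen viewing axis (axis-aligned MIP sketch)."""
    return volume.max(axis=axis)
```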
For pixel value transformation, the core is the Beer-Lambert law, which describes how X-rays irradiating an object are attenuated by factors such as thickness and medium, finally producing different pixel values on the projection plane to form the projection. The projection principle, in which X-rays are emitted from a light source, attenuated by a three-dimensional object, and finally projected onto a two-dimensional plane, can be described by the following formula (1):

$$A = A_0 \exp\!\left(-\sum_{i=1}^{H} \lambda_i D_i\right) \tag{1}$$

where $A_0$ represents the intensity of the X-rays emitted from the light source, $\lambda_i$ represents the attenuation coefficient of the X-rays in the three-dimensional object $i$, $D_i$ represents the distance the X-rays travel through the three-dimensional object $i$, $H$ represents the number of objects the X-rays pass through, and $A$ represents the intensity with which the X-rays finally arrive at the two-dimensional plane. For example, in one example embodiment of the invention, assuming that the target object is a carotid artery occluded-segment vessel, the three-dimensional object $i$ may be the occluded-segment vessel; assuming the occluded vessel has only one segment, $H=1$.
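As a rough numerical sketch (not part of the patent itself), formula (1) can be evaluated directly; the source intensity, attenuation coefficient, and path length below are hypothetical values:

```python
import numpy as np

def beer_lambert_intensity(a0, attenuation, distances):
    """Intensity A reaching the 2D plane after X-rays of source intensity
    A0 pass through H objects with attenuation coefficients lambda_i over
    path lengths D_i: A = A0 * exp(-sum_i lambda_i * D_i), formula (1)."""
    attenuation = np.asarray(attenuation, dtype=float)
    distances = np.asarray(distances, dtype=float)
    return a0 * np.exp(-np.sum(attenuation * distances))

# Single occluded-segment vessel, i.e. H = 1 (hypothetical values).
a = beer_lambert_intensity(a0=100.0, attenuation=[0.2], distances=[5.0])
# a == 100 * exp(-1), i.e. roughly 36.8
```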
For coordinate transformation, the following relationship (2) is satisfied:

$$z_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \left( I_{3\times3} \;\; 0_{3\times1} \right) \begin{pmatrix} R_{3\times3} & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} \tag{2}$$

where $u$ and $v$ are the pixel coordinates in the two-dimensional image, $(x_w \; y_w \; z_w \; 1)^T$ is the homogeneous form of the three-dimensional world coordinates of the object, and $z_c$ is the depth of the point in the camera coordinate system. $K$ is the camera intrinsic matrix; when the camera is given, this value is known. $I_{3\times3}$ is the identity matrix and $0_{3\times1}$ is the zero matrix. $\begin{pmatrix} R_{3\times3} & t \\ 0 & 1 \end{pmatrix}$ is the rigid-body transformation matrix of the three-dimensional object, which, together with the corresponding transformation parameters $\theta = (r_x, r_y, r_z, t_x, t_y, t_z)$, satisfies the following relationships (3) and (4):
$$R_{3\times3} = R_z R_x R_y \tag{3}$$

$$t = (t_x \; t_y \; t_z)^T \tag{4}$$
wherein

$$R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos r_x & -\sin r_x \\ 0 & \sin r_x & \cos r_x \end{pmatrix}, \quad R_y = \begin{pmatrix} \cos r_y & 0 & \sin r_y \\ 0 & 1 & 0 \\ -\sin r_y & 0 & \cos r_y \end{pmatrix}, \quad R_z = \begin{pmatrix} \cos r_z & -\sin r_z & 0 \\ \sin r_z & \cos r_z & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

where $R_{3\times3}$ represents the rotation matrix, $r_x, r_y, r_z$ represent the rotation angles of the three-dimensional object about the axes of the object world coordinate system, and $t$ represents the translation vector. The camera may be, for example, an X-ray machine, such as the X-ray machine of a DSA device.
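As a small sketch (assumptions: standard axis-wise rotation matrices composed as in formula (3), a hypothetical 3x3 intrinsic matrix K, and angles in radians), the rigid transform of formulas (2) to (4) followed by a pinhole projection can be written as:

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """R_3x3 = R_z @ R_x @ R_y, formula (3); angles in radians."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def project_point(K, theta, p_world):
    """Rigid transform (formula (2)) followed by pinhole projection:
    returns the pixel coordinates (u, v) of a 3D world point."""
    rx, ry, rz, tx, ty, tz = theta
    p_cam = rotation_matrix(rx, ry, rz) @ np.asarray(p_world, float) \
            + np.array([tx, ty, tz], float)
    uvw = K @ p_cam           # K: hypothetical 3x3 camera intrinsic matrix
    return uvw[:2] / uvw[2]   # divide by the depth z_c

K = np.array([[1000.0, 0.0, 256.0],
              [0.0, 1000.0, 256.0],
              [0.0, 0.0, 1.0]])
# A point on the optical axis projects to the principal point.
uv = project_point(K, (0.0, 0.0, 0.0, 0.0, 0.0, 0.0), [0.0, 0.0, 100.0])
```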
Step 140: and performing centroid alignment on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fusion medical image.
By registering and aligning the centroid of the target object in the first image with the centroid of the target object in the two-dimensional projection image, the target object in the two images can be registered and aligned, and a two-dimensional fusion medical image is obtained after alignment. The resulting two-dimensional fusion medical image contains both the information of the target object in the first image and the information of the target object in the second image, so the three-dimensional medical image can be used intraoperatively to supplement the information about the same target object in the two-dimensional medical image. For example, intraoperative DSA imaging of the carotid vessels can be supplemented with the carotid occluded-segment information obtained by preoperative HR-VWI, so that a doctor can intuitively obtain the information of the carotid vessels from the two-dimensional fusion medical image.
Illustratively, after a two-dimensional medical image of a measured object is obtained, a target object in the two-dimensional medical image is segmented to obtain a first image, and a first centroid position of the target object in the first image is determined. After obtaining the two-dimensional projection image of the second image, a second centroid position of the target object in the two-dimensional projection image is determined. It will be appreciated that the target object in the first image is the same target object as the target object in the two-dimensional projection image, e.g. the target object may be a segment of an occluded blood vessel. And aligning the first centroid position of the target object in the first image with the second centroid position of the target object in the two-dimensional projection image, so that a two-dimensional fusion medical image can be obtained. For example, two pictures of a certain segment of occluded blood vessel are aligned according to the centroid point of the blood vessel in the picture, and a fused blood vessel picture can be obtained.
The centroid is understood as the center of mass of the target object in the image, and can be defined as the average coordinates of all pixels of the target object. For example, taking the example that the target object is a carotid vessel, the mass of the vessel may be considered to be evenly distributed, and the centroid of the vessel may be defined as the average coordinates of all pixels of the vessel.
According to the medical image processing method provided by the embodiment of the invention, a two-dimensional medical image of the measured object is first acquired, target object segmentation is performed on it to obtain a first image, and three-dimensional image transformation parameters are determined based on the first image; then, an image of the target object segmented from the three-dimensional image of the measured object is obtained as a second image, and the second image is projected from three-dimensional space to two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image; finally, the centroids of the target object in the first image and in the two-dimensional projection image are aligned to obtain a two-dimensional fusion medical image. In this way, the three-dimensional image transformation parameters can be determined from the two-dimensional medical image, the second image can be projected from three-dimensional space to two-dimensional space based on those parameters, and the registration process only needs to be carried out once, which ensures the real-time performance of image registration and fusion. The subsequent centroid alignment of the target object in the first image and the two-dimensional projection image further reduces the error of the image transformation parameters and improves the registration accuracy of the target object across medical images of different dimensions, thereby achieving real-time and accurate registration of the two-dimensional and three-dimensional medical images.
By the medical image processing method provided by the embodiment of the invention, the image information of the missing part of the target object in the two-dimensional medical image can be made up by utilizing the image information of the target object in the three-dimensional medical image, so that the accurate fusion of the multi-dimensional medical image is realized, and the smooth development of medical work is assisted.
Based on the medical image processing method according to the corresponding embodiment of fig. 1, fig. 2 schematically shows a flow chart of a method for centroid alignment of a first image and a target object in a two-dimensional projection image. Referring to fig. 2, the method may include the following steps 210 to 230.
Step 210: a first centroid coordinate of the target object in the first image and a second centroid coordinate of the target object in the two-dimensional projection image are determined, respectively.
The centroids of the target object in the first image obtained in step 110 and of the target object in the two-dimensional projection image obtained in step 130 are determined respectively, and the centroid coordinates of the target object in the two images are obtained through calculation. For example, the calculated centroid coordinates of the target object in the first image are taken as the first centroid coordinate, and the calculated centroid coordinates of the target object in the two-dimensional projection image are taken as the second centroid coordinate.
By way of example, the mass of the target object in the image may be considered to be uniformly distributed, and the centroid coordinates of the target object in the image may be obtained using a centroid calculation formula. For example, take as input a first image whose target object is an occluded blood vessel; because the first image contains only the blood vessel, the image is a binary image and the mass of the vessel can be considered uniformly distributed, so the first centroid coordinate of the target object in the first image and the second centroid coordinate of the target object in the two-dimensional projection image can both be obtained using the centroid coordinate calculation formula. The centroid coordinates can be calculated as in the following formula (5):
$$x_c = \frac{\sum_p x_p}{num(p)}, \qquad y_c = \frac{\sum_p y_p}{num(p)} \tag{5}$$

where $x_c$ and $y_c$ respectively represent the coordinates of the centroid of the target object on the x-axis and the y-axis, $num(p)$ represents the number of pixels occupied by the target object in the image, $\sum_p x_p$ represents the sum of the x-axis coordinates of all pixels of the target object, and $\sum_p y_p$ represents the sum of the y-axis coordinates of all pixels of the target object. The method of calculating the centroid coordinates in the embodiments of the invention is not limited to this.
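A minimal sketch of the centroid calculation for a binary segmentation mask (assuming, as in the text, that the mass of the target is uniformly distributed over its pixels):

```python
import numpy as np

def centroid(mask):
    """Centroid (x_c, y_c) of a binary mask per formula (5): the mean of
    the x and y coordinates over all pixels belonging to the target."""
    ys, xs = np.nonzero(mask)       # coordinates of target pixels
    assert xs.size > 0, "mask contains no target pixels"
    return xs.mean(), ys.mean()     # (sum_p x_p / num(p), sum_p y_p / num(p))

# Toy 5x5 mask: a 3x3 block of "vessel" pixels centred at (2, 2).
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
xc, yc = centroid(mask)  # → (2.0, 2.0)
```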
Step 220: a coordinate difference of the first centroid coordinate and the second centroid coordinate is determined.
A difference is calculated from the first centroid coordinate and the second centroid coordinate to obtain the coordinate difference between them. The coordinate difference may be a vector in the planar coordinate system. Either the second centroid coordinate may be subtracted from the first centroid coordinate, or the first centroid coordinate may be subtracted from the second centroid coordinate.
Step 230: based on the coordinate difference, aligning the mass centers of the target objects in the first image and the two-dimensional projection image through translation transformation to obtain a two-dimensional fusion medical image.
The translation transformation is to translate a known coordinate point according to a translation vector, and obtain the moved coordinate point in a plane or space.
The calculated coordinate difference is illustratively considered as a translation vector, and the first centroid coordinate or the second centroid coordinate is translated according to the translation vector so that the first centroid coordinate coincides with the second centroid coordinate. For example, the first image is a 2D image including only a target blood vessel captured and processed in real time by using the surgical navigation system during surgery, and the two-dimensional projection image is a 2D image projected into a two-dimensional space from a 3D image including only the target blood vessel captured and processed by the pre-operative CT. A first centroid coordinate of the first image and a second centroid coordinate of the two-dimensional projection image are calculated, respectively. And aligning the mass center of the target blood vessel in the two-dimensional projection image with the mass center of the target blood vessel of the intraoperative two-dimensional image through translation transformation, so as to obtain the final fused two-dimensional fusion medical image. Wherein the translation transformation satisfies the following formula (6):
$$\begin{cases} x' = x + \Delta x \\ y' = y + \Delta y \end{cases} \tag{6}$$

where $x$ and $y$ are the coordinates of the two-dimensional projection image, and $x'$ and $y'$ are the coordinates of the two-dimensional projection image after the translation transformation; $x_c^{(1)}$ represents the coordinate value of the first centroid on the x-axis, $x_c^{(2)}$ represents the coordinate value of the second centroid on the x-axis, and $\Delta x = x_c^{(1)} - x_c^{(2)}$ represents the coordinate difference between the first centroid coordinate and the second centroid coordinate on the x-axis; $y_c^{(1)}$ represents the coordinate value of the first centroid on the y-axis, $y_c^{(2)}$ represents the coordinate value of the second centroid on the y-axis, and $\Delta y = y_c^{(1)} - y_c^{(2)}$ represents the coordinate difference between the first centroid coordinate and the second centroid coordinate on the y-axis.
Through translation transformation of the first centroid coordinates or the second centroid coordinates, the first image or the two-dimensional projection image can be integrally translated, so that the first image and the two-dimensional projection image are registered and aligned and then fused, and a two-dimensional fusion medical image is obtained.
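Steps 210 to 230 can be sketched as follows; the integer-pixel shift via `np.roll` and the maximum-based fusion of the two masks are simplifying assumptions, not the patent's exact procedure:

```python
import numpy as np

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def align_by_centroid(first_image, projection):
    """Translate `projection` so the centroid of its target coincides with
    the centroid of the target in `first_image` (formula (6)), then fuse
    the two binary masks by a pixel-wise maximum."""
    x1, y1 = centroid(first_image)   # first centroid (intraoperative 2D)
    x2, y2 = centroid(projection)    # second centroid (projected 3D)
    dx, dy = int(round(x1 - x2)), int(round(y1 - y2))
    # Integer-pixel shift; np.roll wraps around, acceptable for small shifts.
    shifted = np.roll(projection, shift=(dy, dx), axis=(0, 1))
    return np.maximum(first_image, shifted)

first = np.zeros((8, 8), np.uint8); first[2:5, 2:5] = 1  # centroid (3, 3)
proj = np.zeros((8, 8), np.uint8); proj[4:7, 4:7] = 1    # centroid (5, 5)
fused = align_by_centroid(first, proj)
# after alignment the two blocks coincide, so `fused` equals `first`
```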
Based on the medical image processing method of the corresponding embodiment of fig. 1, in an example embodiment, determining the three-dimensional image transformation parameters based on the first image may include:
and inputting the first image into a regressor model to obtain three-dimensional image transformation parameters output by the regressor model. The regressor model is obtained by training based on a sample two-dimensional medical image of a target object and three-dimensional image transformation parameter tag data corresponding to the sample two-dimensional medical image; the sample two-dimensional medical image is generated based on two-dimensional spatial projection of a first sample three-dimensional image of the target object.
And inputting a first image into the trained regressor model, and calculating and outputting three-dimensional image transformation parameters by using the regressor model. The regressor model may be a neural network model that results from training the initial regressor model. When the regressor model is trained, a sample two-dimensional medical image only containing a target object can be used as an input layer sample of the model, and three-dimensional image transformation parameter label data corresponding to the sample two-dimensional medical image can be used as an output layer target of the model for training.
For example, the sample two-dimensional medical image may be generated based on two-dimensional spatial projection of a first sample three-dimensional image of the target object. For example, a two-dimensional spatial projection can be performed on the first sample three-dimensional image to obtain a sample two-dimensional medical image, and the three-dimensional image transformation parameters during projection are recorded. The two-dimensional medical image is a sample two-dimensional medical image, and the recorded three-dimensional image transformation parameters are three-dimensional image transformation parameter label data corresponding to the sample two-dimensional medical image.
By way of example, the initial regressor model may be, but is not limited to, a convolutional neural network (Convolutional Neural Network, CNN), a deep neural network (Deep Neural Networks, DNN), or a recurrent neural network (Recurrent Neural Networks, RNN), among others.
For example, a CNN convolutional neural network may be used as the initial regressor model and trained based on sample two-dimensional medical images containing the target object and the three-dimensional image transformation parameter label data corresponding to those images, to obtain the regressor model. In actual application, a first image containing the target object is input in real time at the input layer of the regressor model, and the three-dimensional image transformation parameters corresponding to the first image can be obtained at the output layer of the regressor model.
Taking a regressor model as an example, a two-dimensional image shot by a surgical navigation system in real time can be subjected to target object segmentation to obtain a first image, and the first image is input into a trained convolutional neural network to obtain a three-dimensional image transformation parameter output by the convolutional neural network.
In order to output three-dimensional image transformation parameters, the process of solving for the transformation parameters can be modeled as a regression problem, establishing the mapping relationship between the input two-dimensional image and the three-dimensional image transformation parameters by means of the nonlinear fitting capability of the neural network. A regression problem refers to a statistical method that studies the relationship between one set of random variables and another set of random variables. Let the intraoperative two-dimensional image to be registered, i.e., the first image, be $I_F$; the true value of the corresponding three-dimensional image transformation parameters is $\theta_{gt}$, which is the value to be solved for. Let the randomly initialized transformation parameters be $\theta_{init}$; the corresponding two-dimensional projection $I_M(\theta_{init})$ of the three-dimensional image can be generated using the ray casting algorithm. The regression problem can then be modeled as the following equation (7):

$$\Delta\hat{\theta} = f\big(I_F,\; I_M(\theta_{init})\big), \qquad \Delta\theta = \theta_{gt} - \theta_{init} \in E \tag{7}$$

where $f(\cdot)$ represents the regressor model, $E$ is the capture range of the regressor model, and $\Delta\theta$ represents the difference between the true value $\theta_{gt}$ of the three-dimensional image transformation parameters and the randomly initialized transformation parameters $\theta_{init}$. Once $\Delta\hat{\theta}$ is known, the regressor model determines, for the first image $I_F$, the regression estimate of the transformation parameters, namely the three-dimensional image transformation parameters output by the regressor model. For convenience, $\theta_{init} = 0$ may be set, in which case equation (7) simplifies to equation (8):

$$\hat{\theta} = f(I_F) \tag{8}$$
therefore, the trained convolutional neural network can be used as a regressive model f (·), and the three-dimensional image transformation parameters can be obtained by inputting the first image into the regressive model f (·).
For example, for the initial regressor model, a LeakyReLU function may be used as the activation function; the LeakyReLU activation function still has a non-zero output when the input is negative, which can alleviate the problem of neuron death. The layer structure of the initial regressor model can be adjusted with residual connections, which can mitigate the vanishing-gradient phenomenon during training. Spatially-Adaptive Normalization (SPADE) may also be used in the normalization layers of the initial regressor model to better capture the semantic features of the input image. It can be appreciated that the structure of the initial regressor model can be optimized with at least one of these schemes, so that the regressor model has better parameter-estimation performance.
In the embodiment of the invention, supervised model training can be performed on the regressor model, so a batch of labeled data is required for training. For example, a batch of intraoperative DSA images with true transformation-parameter values might be used, but training on DSA images is difficult because labeling is hard and the number of intraoperative DSA images available is small. Thus, in another example embodiment, training may be performed using simulated parameters. Specifically, sampled transformation parameters can be obtained based on a preset distribution; the sampled transformation parameters are used to project the first sample three-dimensional image, generating a batch of simulated images with true transformation-parameter values, and these simulated images are used as the sample two-dimensional medical images.
In an exemplary embodiment, fig. 3 schematically illustrates a flowchart of a training method of a regressor model in an embodiment of the present invention, where the training process may sample transformation parameters based on a preset distribution manner. Referring to fig. 3, the regressor model may be trained by the following steps 310 to 340.
Step 310: a first sample three-dimensional image is acquired.
A first sample three-dimensional image is acquired; this is a three-dimensional image containing only the target object. For example, a three-dimensional image of the measured object is captured, target object segmentation is performed on it, and the unnecessary image information is removed to obtain a three-dimensional image containing only the target object; this image is the first sample three-dimensional image.
Step 320: sampling the transformation parameters in a preset range based on a preset distribution mode to obtain sampling transformation parameters.
And acquiring a three-dimensional image transformation parameter in a preset range, and sampling the three-dimensional image transformation parameter based on a preset distribution mode to obtain a sampling transformation parameter for training. The preset range can be determined based on experience or experimental verification, for example, the three-dimensional image transformation parameters in the range with good transformation effect are used as the three-dimensional image transformation parameters in the preset range. The preset distribution manner may include, but is not limited to, gaussian distribution or chi-square distribution, for example.
Step 330: and carrying out two-dimensional space projection on the first sample three-dimensional image based on the sampling transformation parameters to obtain a sample two-dimensional medical image.
Two-dimensional spatial projection is performed on the first sample three-dimensional image based on the sampling transformation parameters obtained in step 320, obtaining sample two-dimensional medical images. For example, n sets of sampling transformation parameters are obtained by sampling in step 320, and two-dimensional spatial projection is performed on the first sample three-dimensional image based on these n sets of parameters, obtaining n sample two-dimensional medical images. It will be appreciated that the n sets of sampling transformation parameters may differ from one another, so two-dimensional medical images of the first sample three-dimensional image at n different imaging perspectives can be obtained.
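A sketch of the sampling in steps 320 and 330, assuming a Gaussian distribution clipped to the preset range; the range values are hypothetical, and the projection step that would consume each sampled parameter set is only indicated in a comment:

```python
import numpy as np

def sample_transform_params(n, low, high, seed=None):
    """Draw n parameter sets theta = (r_x, r_y, r_z, t_x, t_y, t_z) from a
    Gaussian centred on the middle of the preset range [low, high],
    clipped so every sample stays inside the range (step 320)."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    mean, std = (low + high) / 2.0, (high - low) / 6.0
    samples = rng.normal(mean, std, size=(n, low.size))
    return np.clip(samples, low, high)

# Hypothetical preset range: rotations in [-0.2, 0.2] rad, translations in [-10, 10].
low = [-0.2, -0.2, -0.2, -10.0, -10.0, -10.0]
high = [0.2, 0.2, 0.2, 10.0, 10.0, 10.0]
thetas = sample_transform_params(100, low, high, seed=0)
# Step 330: each row of `thetas` would drive one 2D projection of the
# first sample 3D image and serve as that projection's parameter label.
```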
Step 340: and training the initial regressor model based on the sample two-dimensional medical image and the three-dimensional image transformation parameter label data by taking the sampling transformation parameter as the three-dimensional image transformation parameter label data to obtain the regressor model.
In the training stage of the regressor model, a supervised training mode is adopted, the transformation parameters obtained in sampling are used as three-dimensional image transformation parameter label data, and the initial regressor model is trained based on the sample two-dimensional medical image and the three-dimensional image transformation parameter label data, so that the regressor model capable of outputting the three-dimensional image transformation parameters is obtained.
In one example embodiment, a regression loss function $\mathcal{L}_{reg}$ may be used to train the regressor model, defined as equation (9):

$$\mathcal{L}_{reg} = \frac{1}{Q} \sum_{i=1}^{Q} \left\lVert \hat{\theta}_i - \theta_i^{gt} \right\rVert_2^2 \tag{9}$$

where $Q$ represents the number of samples, $\hat{\theta}_i$ represents the output value of the regressor model for the $i$-th sample, and $\theta_i^{gt}$ represents the true value of the three-dimensional image transformation parameters of the $i$-th sample.
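Assuming a mean-squared form over the Q samples described here, the regression loss can be sketched as:

```python
import numpy as np

def regression_loss(theta_pred, theta_gt):
    """Mean over Q samples of the squared L2 distance between predicted
    and ground-truth transformation parameters."""
    diff = np.asarray(theta_pred, float) - np.asarray(theta_gt, float)
    return float(np.mean(np.sum(diff ** 2, axis=1)))

pred = np.array([[0.1, 0.0], [0.0, 0.2]])
gt = np.zeros((2, 2))
loss = regression_loss(pred, gt)  # → (0.01 + 0.04) / 2 = 0.025
```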
Based on the medical image processing method of the above embodiments, in an exemplary embodiment, fig. 4 is a schematic flow chart illustrating a method for processing a three-dimensional image of a measured object to obtain a second image. Referring to fig. 4, the method may include the following steps 410 to 430.
Step 410: and acquiring a three-dimensional image of the measured object, and extracting a cross-sectional image of the three-dimensional image.
A three-dimensional image of the measured object is obtained by imaging the measured object in three dimensions. The three-dimensional image may then be sliced along cross-sections to obtain a plurality of cross-sectional images. For example, a CT scan is performed on the neck of the human body, and a plurality of cross-sectional images can be obtained by slicing the resulting three-dimensional image of the neck along its cross-sections.
Alternatively, the three-dimensional image may be preprocessed, and the cross-sectional image of the preprocessed three-dimensional image may be extracted. The preprocessing may include limiting pixel values and normalization processing.
For example, a three-dimensional image of the measured object is obtained by a medical imaging device such as MRI or CT before the operation and is preprocessed. The specific preprocessing includes limiting the pixel values and normalization. Let the maximum allowed pixel value be $B_{max}$ and the minimum be $B_{min}$; pixel values outside this range may be readjusted into it. Pixel normalization may follow formula (10):

$$B' = \frac{B - B_{min}}{B_{max} - B_{min}} \tag{10}$$

where $B$ is the pixel value before normalization and $B'$ is the pixel value after normalization. The maximum $B_{max}$ and minimum $B_{min}$ of the pixel values can be set according to the actual situation. For example, prior to an interventional procedure, the operator may set $B_{max}$ and $B_{min}$ according to empirical values.
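A minimal sketch of this preprocessing, clipping to [B_min, B_max] and then min-max normalizing as in formula (10); the window values are hypothetical:

```python
import numpy as np

def preprocess(volume, b_min, b_max):
    """Clip pixel values to [b_min, b_max], then min-max normalize to
    [0, 1]: B' = (B - B_min) / (B_max - B_min)."""
    v = np.clip(np.asarray(volume, dtype=float), b_min, b_max)
    return (v - b_min) / (b_max - b_min)

# Hypothetical CT window, e.g. B_min = -100 HU, B_max = 400 HU.
ct = np.array([-500.0, -100.0, 150.0, 400.0, 1000.0])
out = preprocess(ct, b_min=-100.0, b_max=400.0)
# → [0.0, 0.0, 0.5, 1.0, 1.0]
```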
Step 420: inputting the cross-sectional images into a second image semantic segmentation model to obtain segmented images of the target object corresponding to each cross-sectional image output by the second image semantic segmentation model; the second image semantic segmentation model is used for segmenting the target object in the cross-sectional image.
The second image semantic segmentation model is used for segmenting the target object in the cross-sectional image and can be obtained by training a second initial image semantic segmentation model. When the trained second image semantic segmentation model is used for inference, the cross-sectional images of the three-dimensional image extracted in step 410 are taken as input, and the segmented image of the target object corresponding to each cross-sectional image can be obtained.
In an exemplary embodiment, fig. 5 illustrates a flow diagram of a training method of the second image semantic segmentation model. Referring to fig. 5, a method for obtaining a second image semantic segmentation model through training may include the following steps 510 to 530.
Step 510: and obtaining a sample cross-sectional image of the second sample three-dimensional image.
The second sample three-dimensional image may be a historical three-dimensional image of the object under test. Cross-sectional images may be extracted from the historical three-dimensional images to obtain a plurality of sample cross-sectional images. For example, taking a neck blood vessel as an example, a plurality of blood vessel cross-sectional images can be obtained by cross-sectioning a historical CT image of the neck blood vessel, and the plurality of blood vessel cross-sectional images are taken as sample cross-sectional images.
Step 520: and labeling the target object in the sample cross-sectional image to obtain label data corresponding to the sample cross-sectional image.
Labeling the sample cross-sectional image obtained in step 510 by using a labeling process of the target object, so as to obtain label data corresponding to the sample cross-sectional image. The tag data may be output as a sample target for model training. For example, after the CT image of the neck blood vessel is subjected to cross-sectional segmentation processing, a plurality of blood vessel cross-sectional images are obtained, and the blood vessel is marked in the blood vessel cross-sectional images, so that the label data corresponding to the sample cross-sectional images can be obtained.
Step 530: training the second initial image semantic segmentation model based on the sample cross-sectional image and the label data to obtain a second image semantic segmentation model.
And performing supervised training on the second initial image semantic segmentation model by using the sample cross-sectional image of the second sample three-dimensional image obtained in the step 510 and the label data corresponding to the sample cross-sectional image obtained in the step 520, so as to obtain a second image semantic segmentation model. Specifically, the sample cross-sectional image can be used as an input layer sample of the second initial image semantic segmentation model, tag data corresponding to the sample cross-sectional image is used as an output layer target of the second initial image semantic segmentation model, and the second initial image semantic segmentation model is subjected to supervised training to obtain the second image semantic segmentation model.
The second initial image semantic segmentation model may be a neural network model of an encoder-decoder (encoder-decoder) structure, and the neural network model may be a model composed of at least one of a convolutional neural network (Convolutional Neural Network, CNN), a cyclic neural network (Recurrent Neural Networks, RNN), a Long short-term memory (LSTM), and a deep neural network (Deep Neural Networks, DNN), but is not limited thereto.
The loss function for training the semantic segmentation model may be, for example, a hybrid loss function $\mathcal{L}_{mix}$, defined as formula (11):

$$\mathcal{L}_{mix} = \mathcal{L}_{ce} + \lambda \mathcal{L}_{dice} \tag{11}$$
wherein $\mathcal{L}_{ce}$ is the cross-entropy loss function (cross entropy loss), defined as formula (12):

$$\mathcal{L}_{ce} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{M} I_{ic} \log p_{ic} \tag{12}$$

where $N$ is the number of pixels and $M$ is the number of categories; $I_{ic}$ is an indicator function whose value is 1 if the true category of pixel $i$ is $c$, and 0 otherwise; $p_{ic}$ is the probability predicted by the model that pixel $i$ belongs to category $c$. For example, in the embodiment of the invention, the categories may be divided into two classes, background and target object, with $c$ denoting the target-object category; if the true category of pixel $i$ is the target object, such as a carotid blood vessel, then $I_{ic}$ takes the value 1.
$\lambda$ in equation (11) is an adjustable parameter, which can be determined empirically in advance, and $\mathcal{L}_{dice}$ is the Dice loss function, defined as equation (13):

$$\mathcal{L}_{dice} = 1 - \frac{2\,\lvert R \cap G \rvert}{\lvert R \rvert + \lvert G \rvert} \tag{13}$$

where $R$ represents the segmentation result and $G$ represents the ground-truth label.
Exemplary, can be based on the above described mixing loss function
Figure BDA0004088675880000217
The semantic segmentation model is trained using gradient descent.
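As an illustrative sketch only (not the patent's implementation), the hybrid loss of formulas (11) to (13) can be written out numerically as follows; the function names, the small `eps` smoothing term, and the flattened (N, M) probability layout are assumptions of this sketch:

```python
import numpy as np

def cross_entropy_loss(probs, labels, eps=1e-7):
    """Formula (12): mean over N pixels of -sum_c I_ic * log(p_ic).

    probs  -- (N, M) predicted class probabilities per pixel
    labels -- (N,) integer true class per pixel; the indicator I_ic
              simply selects the predicted probability of the true class
    """
    n = probs.shape[0]
    p_true = probs[np.arange(n), labels]  # p_ic where I_ic == 1
    return float(-np.mean(np.log(p_true + eps)))


def dice_loss(pred_mask, true_mask, eps=1e-7):
    """Formula (13): 1 - 2|R ∩ G| / (|R| + |G|) on binary masks."""
    intersection = np.sum(pred_mask * true_mask)
    return float(1.0 - 2.0 * intersection / (pred_mask.sum() + true_mask.sum() + eps))


def hybrid_loss(probs, labels, pred_mask, true_mask, lam=1.0):
    """Formula (11): L_seg = L_ce + lambda * L_dice."""
    return cross_entropy_loss(probs, labels) + lam * dice_loss(pred_mask, true_mask)
```

In practice the probabilities would come from the softmax output of the encoder-decoder segmentation network, and the loss would be minimized by gradient descent as described above.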
Step 430: and splicing the segmented images corresponding to the cross-sectional images to obtain a second image.
The segmented images of the target object corresponding to each cross-sectional image obtained in step 420 are stitched in sequence, thereby obtaining the second image.
In this method, the three-dimensional image is first sliced into cross-sectional images, target object semantic segmentation is performed on each cross-sectional image separately, and the resulting segmented images are then stitched together to obtain the second image, which contains only the target object. Slicing the three-dimensional image into two-dimensional images before performing target object segmentation avoids semantic segmentation directly on the three-dimensional image, which saves a large amount of computing resources, increases the speed of semantic segmentation, and accelerates generation of the second image.
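The slice-then-stitch procedure of steps 410 to 430 can be sketched as follows; `segment_slice` is a hypothetical stand-in for the second image semantic segmentation model, and axis 0 is assumed to index the cross-sections:

```python
import numpy as np

def segment_volume_by_slices(volume, segment_slice):
    """Segment a 3D volume slice by slice and stitch the per-slice
    masks back into a 3D mask (the "second image").

    volume        -- (D, H, W) array; axis 0 indexes cross-sections
    segment_slice -- callable mapping one (H, W) slice to a binary mask,
                     standing in for the second image semantic
                     segmentation model
    """
    masks = [segment_slice(volume[k]) for k in range(volume.shape[0])]
    return np.stack(masks, axis=0)  # (D, H, W) stitched result
```

For example, a simple intensity threshold can play the role of the per-slice model when exercising the stitching step in isolation.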
Based on the medical image processing method of the above embodiments, in an example embodiment, segmenting the target object from the two-dimensional medical image to obtain the first image may include: inputting the two-dimensional medical image into a first image semantic segmentation model to obtain the first image output by the first image semantic segmentation model; the first image semantic segmentation model is used for segmenting the target object in the two-dimensional medical image.
For example, a two-dimensional medical image captured in real time by the intraoperative surgical navigation system may be input into the first image semantic segmentation model to obtain the first image, in which the target object has been segmented.
For example, the first initial image semantic segmentation model may be trained based on sample two-dimensional medical images of the measured object and the label data corresponding to those images, so as to obtain the first image semantic segmentation model.
The sample two-dimensional medical images of the measured object may be frame images from historical two-dimensional images of the measured object, and the target objects in these frame images may be annotated to obtain the corresponding label data.
The first initial image semantic segmentation model is trained with supervision using the sample two-dimensional medical images of the measured object and the corresponding label data, so as to obtain the first image semantic segmentation model. Specifically, a sample two-dimensional medical image of the measured object may be used as an input sample for the input layer of the first initial image semantic segmentation model, the label data corresponding to that image may be used as the target output of the output layer, and supervised training of the first initial image semantic segmentation model then yields the first image semantic segmentation model.
When training the first initial image semantic segmentation model, the hybrid loss function L_seg described above may likewise be used; the details are not repeated here.
The first initial image semantic segmentation model may be a neural network model with an encoder-decoder structure, which may be composed of at least one of a Convolutional Neural Network (CNN), a recurrent neural network (RNN), a long short-term memory network (LSTM), and a Deep Neural Network (DNN), but is not limited thereto.
In the embodiment of the present invention, the trained first image semantic segmentation model may be used to perform semantic segmentation on the two-dimensional medical image, and the trained second image semantic segmentation model may be used to perform semantic segmentation on the three-dimensional medical image, extracting from each the target object used for registration. Both initial models may adopt neural networks with an encoder-decoder architecture. Because images of different spatial dimensions are segmented with different models, the image semantic segmentation models for two-dimensional and three-dimensional images can be trained separately.
According to the medical image processing method provided by the embodiment of the invention, the three-dimensional image transformation parameters are determined by utilizing the two-dimensional medical image, the second image is projected from the three-dimensional space to the two-dimensional space based on the three-dimensional image transformation parameters, and the registering process only needs to be carried out once, so that the instantaneity of registering and fusing the images is ensured; and then the centroid alignment is carried out on the target object in the first image and the two-dimensional projection image, so that the error of the image transformation parameter can be further reduced, and the registration accuracy of the target object in the medical images with different dimensions is improved, thereby realizing the real-time accurate registration of the two-dimensional medical image and the three-dimensional medical image. By the medical image processing method provided by the embodiment of the invention, the image information of the missing part of the target object in the two-dimensional medical image can be made up by utilizing the image information of the target object in the three-dimensional medical image, so that the information fusion of the multi-mode images is realized, and a doctor is assisted in performing operation.
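As a rough illustration of projecting the second image from three-dimensional to two-dimensional space, the following minimal sketch applies only the integer translation components of the transformation parameters and then a maximum-intensity parallel projection; a full implementation would apply the complete rigid transform, including rotations, before projecting. All names here are hypothetical:

```python
import numpy as np

def project_to_2d(volume, tx=0, ty=0):
    """Translate a segmented volume by integer (tx, ty), then collapse
    it along axis 0 with a maximum-intensity parallel projection.

    volume -- (D, H, W) binary segmentation of the target object
    Returns an (H, W) two-dimensional projection image.
    """
    shifted = np.roll(volume, shift=(ty, tx), axis=(1, 2))  # in-plane translation
    return shifted.max(axis=0)  # maximum-intensity projection
```

Tools such as projective (DRR-style) rendering would be needed to match real X-ray geometry; this sketch only shows the transform-then-project structure of the conversion step.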
The medical image processing apparatus provided by the present invention is described below; the medical image processing apparatus described below and the medical image processing method described above may be referred to in correspondence with each other.
Fig. 6 schematically illustrates a structural diagram of a medical image processing apparatus according to an embodiment of the present invention. Referring to fig. 6, a medical image processing apparatus 600 may include: an acquisition module 610, configured to acquire a two-dimensional medical image of a measured object; a segmentation module 620, configured to perform target object segmentation on the two-dimensional medical image to obtain a first image; a determining module 630, configured to determine three-dimensional image transformation parameters based on the first image; a conversion module 640, configured to acquire a second image and project the second image from three-dimensional space to two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image, the second image being an image of the target object segmented from the three-dimensional image of the measured object; and an alignment module 650, configured to perform centroid alignment on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fusion medical image.
In one example embodiment, the alignment module 650 may include: a first determining unit, configured to determine a first centroid coordinate of the target object in the first image and a second centroid coordinate of the target object in the two-dimensional projection image, respectively; a second determining unit configured to determine a coordinate difference between the first centroid coordinate and the second centroid coordinate; and the alignment unit is used for aligning the mass centers of the target objects in the first image and the two-dimensional projection image through translation transformation based on the coordinate difference value to obtain a two-dimensional fusion medical image.
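The work of the determining and alignment units can be sketched as follows; the function names are hypothetical, and the integer `np.roll` shift is a simplification of the translation transformation applied to binary masks:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of the target-object pixels in a binary mask."""
    coords = np.argwhere(mask > 0)
    return coords.mean(axis=0)


def align_by_centroid(projection_mask, first_image_mask):
    """Shift the projected mask so the two target-object centroids coincide.

    Returns the aligned projection and the (dy, dx) coordinate difference
    between the first and second centroid coordinates.
    """
    diff = np.round(centroid(first_image_mask) - centroid(projection_mask)).astype(int)
    dy, dx = int(diff[0]), int(diff[1])
    aligned = np.roll(projection_mask, shift=(dy, dx), axis=(0, 1))
    return aligned, (dy, dx)
```

A subpixel implementation would interpolate rather than roll, but the structure (two centroids, one coordinate difference, one translation) matches the three units described above.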
In an exemplary embodiment, the determining module 630 is configured to input the first image into a regressor model, and obtain three-dimensional image transformation parameters output by the regressor model; the regressor model is obtained by training based on a sample two-dimensional medical image of the target object and three-dimensional image transformation parameter tag data corresponding to the sample two-dimensional medical image; the sample two-dimensional medical image is generated based on two-dimensional spatial projection of a first sample three-dimensional image of the target object.
In an example embodiment, the medical image processing apparatus 600 may further include a first training module for training the regressor model. Specifically, the first training module may include: an acquisition unit, configured to acquire the first sample three-dimensional image; a sampling unit, configured to sample transformation parameters within a preset range based on a preset distribution mode to obtain sampling transformation parameters; a projection unit, configured to perform two-dimensional spatial projection on the first sample three-dimensional image based on the sampling transformation parameters to obtain a sample two-dimensional medical image; and a training unit, configured to take the sampling transformation parameters as three-dimensional image transformation parameter label data and train the initial regressor model based on the sample two-dimensional medical image and the label data to obtain the regressor model.
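The sampling unit's preset distribution mode is not fixed by the text; assuming a uniform distribution over per-parameter preset ranges, a minimal sketch is:

```python
import numpy as np

def sample_transform_params(n, ranges, rng=None):
    """Sample n training transforms uniformly from preset per-parameter ranges.

    ranges -- dict mapping a parameter name (e.g. a rotation or translation
              component) to its (low, high) preset range; the uniform
              distribution here stands in for the preset distribution mode
    Returns an (n, len(ranges)) array; each row would later be used both to
    project the first sample 3D image and as the regressor's label data.
    """
    rng = np.random.default_rng(rng)
    lows = np.array([lo for lo, hi in ranges.values()])
    highs = np.array([hi for lo, hi in ranges.values()])
    return rng.uniform(lows, highs, size=(n, len(ranges)))
```

Other distributions (e.g. Gaussian around the expected intraoperative pose) could be substituted without changing the rest of the training pipeline.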
In an example embodiment, the medical image processing apparatus 600 may further include: the cross section extraction module is used for acquiring the three-dimensional image of the measured object and extracting a cross section image of the three-dimensional image; the second semantic segmentation module is used for inputting the cross-sectional images into a second image semantic segmentation model to obtain segmented images of the target object corresponding to each cross-sectional image output by the second image semantic segmentation model; the second image semantic segmentation model is used for segmenting a target object in the cross-sectional image; and the splicing module is used for splicing the segmented images corresponding to the cross-section images to obtain a second image.
In an example embodiment, the medical image processing apparatus 600 may further include a second training module for training the second image semantic segmentation model. Specifically, the second training module may include: an acquisition unit, configured to acquire sample cross-sectional images of the second sample three-dimensional image; a labeling unit, configured to label the target object in the sample cross-sectional images to obtain label data corresponding to the sample cross-sectional images; and a training unit, configured to train the second initial image semantic segmentation model based on the sample cross-sectional images and the label data to obtain the second image semantic segmentation model.
In an example embodiment, the medical image processing apparatus 600 may further include: the first semantic segmentation module is used for segmenting a target object from the two-dimensional medical image to obtain a first image; the first semantic segmentation module is specifically used for inputting the two-dimensional medical image into the first image semantic segmentation model to obtain a first image output by the first image semantic segmentation model; the first image semantic segmentation model is used for segmenting a target object in the two-dimensional medical image.
Fig. 7 illustrates a schematic structural diagram of an electronic device, which may include: processor 710, communication interface (Communication Interface) 720, memory 730, and communication bus 740, wherein processor 710, communication interface 720, and memory 730 may communicate with each other via communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform the medical image processing methods provided by the method embodiments described above, which may include, for example: acquiring a two-dimensional medical image of a measured object, and performing target object segmentation on the two-dimensional medical image to obtain a first image; determining three-dimensional image transformation parameters based on the first image; acquiring a second image, and projecting the second image from a three-dimensional space to a two-dimensional space based on three-dimensional image transformation parameters to obtain a two-dimensional projection image; the second image is an image of a target object segmented from the three-dimensional image of the object to be measured; and performing centroid alignment on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fusion medical image.
The electronic device may comprise a DSA device or an image-guided based interventional surgical device, for example.
Further, the logic instructions in the memory 730 may be implemented in the form of software functional units and, when sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In still another aspect, the present invention further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the medical image processing method provided by the above-mentioned method embodiments, and the method may, for example, include: acquiring a two-dimensional medical image of a measured object, and performing target object segmentation on the two-dimensional medical image to obtain a first image; determining three-dimensional image transformation parameters based on the first image; acquiring a second image, and projecting the second image from a three-dimensional space to a two-dimensional space based on three-dimensional image transformation parameters to obtain a two-dimensional projection image; the second image is an image of a target object segmented from the three-dimensional image of the object to be measured; and performing centroid alignment on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fusion medical image.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A medical image processing method, comprising:
acquiring a two-dimensional medical image of a measured object, and performing target object segmentation on the two-dimensional medical image to obtain a first image;
determining three-dimensional image transformation parameters based on the first image;
acquiring a second image, and projecting the second image from a three-dimensional space to a two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image; the second image is an image of the target object segmented from the three-dimensional image of the object to be measured;
and aligning the mass centers of the first image and the target object in the two-dimensional projection image to obtain a two-dimensional fusion medical image.
2. The medical image processing method according to claim 1, wherein the performing centroid alignment on the target object in the first image and the two-dimensional projection image to obtain a two-dimensional fused medical image includes:
respectively determining a first centroid coordinate of the target object in the first image and a second centroid coordinate of the target object in the two-dimensional projection image;
determining a coordinate difference of the first centroid coordinate and the second centroid coordinate;
and aligning the centroids of the target objects in the first image and the two-dimensional projection image through translation transformation based on the coordinate difference value to obtain the two-dimensional fusion medical image.
3. The medical image processing method of claim 1, wherein the determining three-dimensional image transformation parameters based on the first image comprises:
inputting the first image into a regressor model to obtain the three-dimensional image transformation parameters output by the regressor model;
the regressor model is obtained by training based on a sample two-dimensional medical image of the target object and three-dimensional image transformation parameter tag data corresponding to the sample two-dimensional medical image; the sample two-dimensional medical image is generated based on two-dimensional spatial projection of a first sample three-dimensional image of the target object.
4. A medical image processing method according to claim 3, wherein the regressor model is trained based on the following steps:
acquiring the first sample three-dimensional image;
sampling the transformation parameters in a preset range based on a preset distribution mode to obtain sampling transformation parameters;
performing two-dimensional space projection on the first sample three-dimensional image based on the sampling transformation parameters to obtain the sample two-dimensional medical image;
and training an initial regressor model based on the sample two-dimensional medical image and the three-dimensional image transformation parameter label data by taking the sampling transformation parameter as the three-dimensional image transformation parameter label data to obtain the regressor model.
5. The medical image processing method according to any one of claims 1 to 4, further comprising:
acquiring a three-dimensional image of the measured object, and extracting a cross-sectional image of the three-dimensional image;
inputting the cross-sectional images into a second image semantic segmentation model to obtain segmented images of the target object corresponding to each cross-sectional image output by the second image semantic segmentation model; the second image semantic segmentation model is used for segmenting the target object in the cross-sectional image;
And splicing the segmented images corresponding to the cross-sectional images to obtain the second image.
6. The medical image processing method according to claim 5, wherein the second image semantic segmentation model is trained based on the steps of:
acquiring a sample cross-sectional image of the second sample three-dimensional image;
labeling the target object in the sample cross-sectional image to obtain label data corresponding to the sample cross-sectional image;
training a second initial image semantic segmentation model based on the sample cross-sectional image and the label data to obtain the second image semantic segmentation model.
7. The medical image processing method according to any one of claims 1 to 4, wherein segmenting the target object from the two-dimensional medical image to obtain a first image comprises:
inputting the two-dimensional medical image into a first image semantic segmentation model to obtain the first image output by the first image semantic segmentation model; the first image semantic segmentation model is used for segmenting the target object in the two-dimensional medical image.
8. A medical image processing apparatus, comprising:
The acquisition module is used for acquiring a two-dimensional medical image of the tested object;
the segmentation module is used for carrying out target object segmentation on the two-dimensional medical image to obtain a first image;
a determining module for determining three-dimensional image transformation parameters based on the first image;
the conversion module is used for acquiring a second image, and projecting the second image from a three-dimensional space to a two-dimensional space based on the three-dimensional image transformation parameters to obtain a two-dimensional projection image; the second image is an image of the target object segmented from the three-dimensional image of the object to be measured;
and the alignment module is used for aligning the mass centers of the first image and the target object in the two-dimensional projection image to obtain a two-dimensional fusion medical image.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the medical image processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the medical image processing method according to any one of claims 1 to 7.
CN202310144297.8A 2023-02-17 2023-02-17 Medical image processing method, medical image processing device, electronic equipment and storage medium Pending CN116363030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310144297.8A CN116363030A (en) 2023-02-17 2023-02-17 Medical image processing method, medical image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310144297.8A CN116363030A (en) 2023-02-17 2023-02-17 Medical image processing method, medical image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116363030A true CN116363030A (en) 2023-06-30

Family

ID=86917687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310144297.8A Pending CN116363030A (en) 2023-02-17 2023-02-17 Medical image processing method, medical image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116363030A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649350A (en) * 2024-01-29 2024-03-05 天津恒宇医疗科技有限公司 Fusion method, device and equipment of intravascular image and contrast image
CN117649350B (en) * 2024-01-29 2024-05-03 天津恒宇医疗科技有限公司 Fusion method, device and equipment of intravascular image and contrast image

Similar Documents

Publication Publication Date Title
US20210133999A1 (en) Augmenting image data of medically invasive devices having non-medical structures
CN108744306A (en) Subject positioning device, subject localization method, subject finder and radiation treatment systems
EP3509013A1 (en) Identification of a predefined object in a set of images from a medical image scanner during a surgical procedure
JP5335280B2 (en) Alignment processing apparatus, alignment method, program, and storage medium
US9240046B2 (en) Method and system to assist 2D-3D image registration
Song et al. Locally rigid, vessel-based registration for laparoscopic liver surgery
US11961193B2 (en) Method for controlling a display, computer program and mixed reality display device
US20130190602A1 (en) 2d3d registration for mr-x ray fusion utilizing one acquisition of mr data
JP2011502681A (en) System and method for quantitative 3DCEUS analysis
US10977390B2 (en) Anonymisation of medical patient images using an atlas
US10515449B2 (en) Detection of 3D pose of a TEE probe in x-ray medical imaging
CN116363030A (en) Medical image processing method, medical image processing device, electronic equipment and storage medium
CN109350059B (en) Combined steering engine and landmark engine for elbow auto-alignment
US20220000442A1 (en) Image orientation setting apparatus, image orientation setting method, and image orientation setting program
CN113538419B (en) Image processing method and system
Wei et al. Towards fully automatic 2D US to 3D CT/MR Registration: A novel segmentation-based Strategy
EP3910597A1 (en) Body representations
US11138736B2 (en) Information processing apparatus and information processing method
CN114494364A (en) Liver three-dimensional ultrasonic and CT image registration initialization method and device and electronic equipment
CN112790778A (en) Collecting mis-alignments
JP2007296341A (en) System and method for determining distal end of catheter with x-ray base
Kuhn Aim project a2003: Computer vision in radiology (covira)
JP5706933B2 (en) Processing apparatus, processing method, and program
US11430203B2 (en) Computer-implemented method for registering low dimensional images with a high dimensional image, a method for training an aritificial neural network useful in finding landmarks in low dimensional images, a computer program and a system for registering low dimensional images with a high dimensional image
CN111803111B (en) Brain blood vessel display device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination