CN118014923A - Medical image fusion method, system, electronic device and storage medium
- Publication number: CN118014923A (application CN202211400094.2A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/20221 — Image fusion; Image merging
Abstract
The invention provides a medical image fusion method, a medical image fusion system, an electronic device, and a storage medium. First, a localization image of a patient and a preoperative medical image of the patient's examined region are acquired, and a target focus area medical image of a first target organ or tissue is obtained from the preoperative medical image. The target focus area medical image is then mapped onto the localization image to obtain a first medical image to be fused. Next, an intra-operative medical image of the examined region is acquired in real time, and a second medical image to be fused of a second target organ or tissue is obtained from it. Finally, the first medical image to be fused and the second medical image to be fused are fused in real time. The invention can better assist physicians in making decisions and guiding treatment during interventional procedures.
Description
Technical Field
The present invention relates to the field of medical image processing technology, and in particular to a medical image fusion method, system, electronic device, and storage medium.
Background
With the rapid development of medical image processing technology, medical imaging devices are widely used in clinical diagnosis and medical research, with applications including lesion detection, tumor therapy evaluation, and interventional therapy. The imaging techniques involved in these devices mainly include positron emission tomography (PET), computed tomography (CT), magnetic resonance (MR) imaging, and digital subtraction angiography (DSA). A PET image, as a functional imaging technique, provides metabolic and functional information about the imaged region and can clearly display the metabolic uptake of a tracer in a lesion; an MR image provides morphological and structural information about the imaged region; DSA provides a true stereoscopic image for interventional procedures by removing unwanted tissue shadows and retaining only the vessel image. However, these medical imaging techniques are independent of one another, and each has significant advantages and disadvantages: PET and MR imaging can obtain clear images of organs or tissues (including lesions) but have difficulty capturing the blood supply status of a lesion, while DSA imaging can acquire blood vessel images well but, as a type of X-ray examination, inherently has poor tissue resolution and has difficulty resolving lesions.
In practice, there is often a need to combine two or more imaging techniques for better diagnosis and decision-making. One approach is used during interventional procedures: the physician assesses the lesion's blood supply through comprehensive intra-operative contrast imaging, then makes intra-operative decisions in combination with other imaging examinations such as pre-operative CT and MR. This approach not only depends on the physician's experience but also divides the physician's attention during the procedure and may even affect surgical safety. The other approach is pre-operative medical image fusion (Image Fusion): image data of the same target acquired through different imaging techniques (multi-source channels) are processed by computer image processing technology, the useful information in each channel is extracted to the greatest extent, and the results are finally fused into a single high-quality medical image.
It should be noted that the information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to solve the problem that existing medical imaging techniques cannot simultaneously provide clear lesion tissue and the lesion's blood supply status during interventional procedures, and provides a medical image fusion method, system, electronic device, and storage medium to better assist physicians in making intra-operative decisions and guiding treatment.
In order to achieve the above purpose, the invention is realized by the following technical scheme: a medical image fusion method, comprising:
acquiring a localization image of a patient and a preoperative medical image of an examined region of the patient; acquiring a target focus area medical image of a first target organ or tissue according to the preoperative medical image; wherein the preoperative medical image is obtained by a first medical imaging device;
mapping the medical image of the target focus area to the positioning image to obtain a first medical image to be fused;
Acquiring an intra-operative medical image of the examined part in real time, and acquiring a second medical image to be fused of a second target organ or tissue according to the intra-operative medical image; wherein the intra-operative medical image is obtained by a second medical imaging device;
and fusing the first medical image to be fused and the second medical image to be fused in real time.
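The four steps above can be sketched as a minimal pipeline. All helper functions below are hypothetical stand-ins (simple intensity thresholds and an identity mapping) for the segmentation, identification, mapping, and fusion operations, which the text leaves open; none of them is prescribed by the patent.

```python
import numpy as np

# Hypothetical stand-ins for the patent's sub-steps; the thresholds and
# the identity mapping are illustrative only.
def segment_first_organ(img):            # S100: organ/tissue segmentation
    return np.where(img > 0.2, img, 0.0)

def identify_target_lesion(organ_img):   # S100: target focus area identification
    return np.where(organ_img > 0.8, organ_img, 0.0)

def map_to_localizer(lesion_img, localizer):  # S200: mapping (identity here)
    return lesion_img

def extract_second_organ(frame):         # S300: e.g. vessels from a DSA frame
    return np.where(frame > 0.5, frame, 0.0)

def fuse_medical_images(localizer, preop_image, intraop_frames):
    """Prepare the first image to be fused once, pre-operatively, then
    fuse it with every intra-operative frame as it arrives."""
    first = map_to_localizer(
        identify_target_lesion(segment_first_organ(preop_image)), localizer)
    for frame in intraop_frames:               # real-time acquisition
        second = extract_second_organ(frame)
        yield np.maximum(first, second)        # simple max-fusion
```

In a real system the generator would be driven by the frame stream of the second medical imaging device rather than a Python list.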
Optionally, acquiring the target focus area medical image of the first target organ or tissue includes:
segmenting the first target organ or tissue region from the acquired preoperative medical image to obtain a preoperative image of the first target organ or tissue;
and identifying the target focus area in the preoperative image of the first target organ or tissue to obtain a target focus area medical image of the first target organ or tissue.
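A minimal sketch of this two-stage extraction, assuming (purely for illustration) that the organ segmentation is given as a binary mask and that the focus area can be separated by a fixed intensity threshold; a real system would use trained segmentation and classification models instead:

```python
import numpy as np

def extract_focus_area(preop_image, organ_mask, focus_threshold):
    """Stage 1: restrict the preoperative image to the segmented first
    target organ or tissue; stage 2: keep the pixels identified as the
    target focus area (here: a fixed intensity threshold)."""
    organ_image = np.where(organ_mask, preop_image, 0.0)   # segmentation result
    focus_mask = organ_image > focus_threshold             # identification result
    return np.where(focus_mask, organ_image, 0.0), focus_mask
```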
Optionally, mapping the medical image of the target focus area to the positioning image to obtain the first medical image to be fused includes:
Determining the position information of the target focus area in the positioning image coordinate system according to the imaging parameters of the preoperative medical image and the pixel information of the target focus area medical image;
and obtaining the first medical image to be fused according to the position information of the target focus area in the positioning image coordinate system.
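One common way to realize such a mapping (an assumption here, not mandated by the text) is a DICOM-style affine relation in which a pixel index is converted into localization-image coordinates using the image origin, pixel spacing, and direction cosines, all of which are imaging parameters of the preoperative acquisition:

```python
import numpy as np

def pixel_to_localizer_coords(indices, origin, spacing, orientation):
    """Map (row, col) pixel indices of the target focus area into the
    localization-image (patient) coordinate system.

    indices     : (N, 2) pixel indices of focus-area pixels
    origin      : (3,)   patient-space position of pixel (0, 0)
    spacing     : (2,)   physical pixel size along the row/col axes
    orientation : (3, 2) direction cosines of the row and column axes
    """
    # physical offset of each pixel, then shift by the image origin
    return origin + (indices * spacing) @ orientation.T
```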
Optionally, fusing the first medical image to be fused and the second medical image to be fused in real time includes:
Determining anatomical points of the localization image;
Acquiring position information of an anatomical point of the positioning image under the positioning image coordinate system and pixel information of the anatomical point of the positioning image under the medical image coordinate system; determining fusion position information parameters according to the position information of the anatomical points of the positioning image under the positioning image coordinate system and the pixel information of the anatomical points of the positioning image under the medical image coordinate system;
determining the position information of the second target organ or tissue under the positioning image coordinate system according to the fusion position information parameter and the pixel information of the second medical image to be fused under the medical image coordinate system;
And fusing the first medical image to be fused and the second medical image to be fused in real time according to the position information of the target focus area in the positioning image coordinate system and the position information of the second target organ or tissue in the positioning image coordinate system.
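The "fusion position information parameter" derived from matched anatomical points can, for example, be realized as a least-squares rigid transform between the two point sets; the Kabsch algorithm below is one standard way to compute it. This is an illustrative choice only — the patent does not name a specific algorithm:

```python
import numpy as np

def rigid_transform_from_points(src, dst):
    """Least-squares rigid transform (rotation R, translation t) such
    that R @ src_i + t approximates dst_i, computed by the Kabsch
    algorithm from matched anatomical points. src, dst: (N, dim)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Applying `R` and `t` to the pixel positions of the second medical image to be fused would then yield its position information in the localization image coordinate system.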
Optionally, the positioning image includes a two-dimensional planar image of the patient, and the fusion result of the first medical image to be fused and the second medical image to be fused includes a three-dimensional medical image;
the first medical imaging device comprises any one of an MR device, a CT device and a PET device, and the second medical imaging device comprises a DSA device.
Optionally, the medical image fusion method further comprises: displaying the fusion result of the first medical image to be fused and the second medical image to be fused in real time according to a designated display mode.
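As an illustration of a "designated display mode", the sketch below supports two hypothetical modes, an alpha-blended overlay and a side-by-side view; both the mode names and the blending weight are assumptions, since the text only requires that some specified display mode be applied:

```python
import numpy as np

def render(first_fused, second_fused, mode="overlay", alpha=0.5):
    """Render the real-time fusion result in a designated display mode."""
    if mode == "overlay":                  # weighted alpha blend
        return alpha * first_fused + (1.0 - alpha) * second_fused
    if mode == "side_by_side":             # both images next to each other
        return np.concatenate([first_fused, second_fused], axis=1)
    raise ValueError(f"unknown display mode: {mode!r}")
```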
In order to achieve the above object, the present invention also provides a medical image fusion system including:
A first medical image acquisition device configured to acquire a localization image of a patient and a preoperative medical image of an examined region of the patient, and to acquire a target focus area medical image of a first target organ or tissue according to the preoperative medical image; wherein the preoperative medical image is obtained by a first medical imaging device;
the medical image positioning unit is configured to map the medical image of the target focus area to the positioning image to obtain a first medical image to be fused;
A second medical image acquisition device configured to acquire an intra-operative medical image of the examined region in real time, and acquire a second medical image to be fused of a second target organ or tissue based on the intra-operative medical image; wherein the intra-operative medical image is obtained by a second medical imaging device;
And the medical image fusion unit is configured to fuse the first medical image to be fused and the second medical image to be fused in real time.
Optionally, the medical image fusion system further includes a display unit configured to display the fusion result of the first medical image to be fused and the second medical image to be fused in real time according to a specified display mode.
In order to achieve the above object, the present invention further provides an electronic device, which includes a processor and a memory, the memory storing a computer program, the computer program, when executed by the processor, implementing the medical image fusion method according to any one of the above.
In order to achieve the above object, the present invention also provides a readable storage medium having stored therein a computer program which, when executed by a processor, implements the medical image fusion method of any one of the above.
Compared with the prior art, the medical image fusion method, the system, the electronic equipment and the storage medium provided by the invention have the following advantages:
The medical image fusion method provided by the invention first acquires a localization image of a patient and a preoperative medical image of an examined region of the patient, and acquires a target focus area medical image of a first target organ or tissue according to the preoperative medical image, the preoperative medical image being obtained by a first medical imaging device. The target focus area medical image is then mapped onto the localization image to obtain a first medical image to be fused. Next, an intra-operative medical image of the examined region is acquired in real time, and a second medical image to be fused of a second target organ or tissue is acquired according to the intra-operative medical image, the intra-operative medical image being obtained by a second medical imaging device. Finally, the first medical image to be fused and the second medical image to be fused are fused in real time. Because the target focus area medical image of the first target organ or tissue can be acquired in advance from the preoperative medical image of the examined region, it does not need to be acquired during the operation; only the intra-operative medical image of the examined region needs to be acquired in real time, which improves the efficiency of image fusion.
Furthermore, the medical image fusion method provided by the invention fuses the first medical image to be fused and the second medical image to be fused in real time (for example, according to the position information of the anatomical points of the localization image). This arrangement requires no additional positioning markers, so no positioning component for acquiring such markers needs to be added when acquiring the first medical image to be fused of the first target organ or tissue or the second medical image to be fused of the second target organ or tissue, which reduces the complexity of acquiring both images and makes the method easy to implement. Further, the method can make full use of the respective advantages of the first and second medical imaging devices: the MR, CT, PET or MIP (Maximum Intensity Projection) image of the focus area acquired before the operation is fused with the real-time intra-operative vascular DSA image, fully combining the morphological and structural information of the focus area provided by the MR, CT, PET or MIP image with the blood supply status of the focus area obtained from the vascular DSA image. This better assists physicians in making decisions and guiding treatment during interventional procedures, and lays a solid foundation for improving surgical safety and efficiency.
Since the medical image fusion system, the electronic device, and the storage medium provided by the invention belong to the same inventive concept as the medical image fusion method provided by the invention, they have at least the same beneficial effects, which are not described in detail here.
Drawings
FIG. 1 is a schematic general flow chart of a medical image fusion method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a medical image of a target focal region of a first target organ obtained by applying the medical image fusion method according to the first embodiment of the present invention;
fig. 3 is a schematic diagram of mapping a medical image of a target lesion area to a localization image by applying the medical image fusion method according to the first embodiment of the present invention;
fig. 4 is a schematic diagram of a medical image fusion method according to a first embodiment of the present invention for matching a second medical image to be fused of a second target organ according to anatomical points;
fig. 5 is a schematic diagram of a fusion result of a first medical image to be fused and a second medical image to be fused by applying the medical image fusion method according to the first embodiment of the present invention;
Fig. 6 is a block diagram of a medical image fusion system according to a second embodiment of the present invention;
fig. 7 is a schematic block diagram of an electronic device according to a third embodiment of the present invention.
Wherein, the reference numerals are as follows:
A first target organ-a, a target lesion-b, a localization image-c, a second target tissue-d, an anatomical point-e;
The system comprises a first medical image acquisition device-101, a medical image positioning unit-102, a second medical image acquisition device-103, a medical image fusion unit-104, a display unit-105 and a man-machine interaction unit-106;
processor-201, communication interface-202, memory-203, communication bus-204.
Detailed Description
The medical image fusion method, system, electronic device, and storage medium provided by the invention are described in further detail below with reference to the accompanying drawings, from which the advantages and features of the invention will become more apparent. It should be noted that the drawings are in a greatly simplified form and not to precise scale; they are provided merely to aid in conveniently and clearly explaining the embodiments of the invention. The structures, proportions, and sizes shown in the drawings are intended only to assist in understanding and reading this disclosure and are not intended to limit the scope of the invention, which is defined by the appended claims; any structural modification, change of proportion, or adjustment of size that achieves the same or similar effects and objectives as the invention should still fall within the scope covered by the technical content disclosed herein. Specific design features disclosed herein, including, for example, specific dimensions, orientations, positions, and configurations, will be determined in part by the particular intended application and use environment. In the embodiments described below, the same reference numerals are shared between drawings to denote identical parts or parts having the same functions, and repeated description of them may be omitted; once an item is defined in one drawing, it need not be discussed further in subsequent drawings.
Additionally, if a method described herein comprises a series of steps, the order in which the steps are presented is not necessarily the only order in which they may be performed; some of the described steps may be omitted and/or other steps not described herein may be added to the method.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. The singular forms "a," "an," and "the" include plural referents; the term "or" is generally used in the sense of "and/or"; the term "several" is generally used in the sense of "at least one"; the term "at least two" is generally used in the sense of "two or more"; and the terms "first," "second," and "third" are for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of features indicated.
The invention provides a medical image fusion method, a medical image fusion system, electronic equipment and a storage medium, so as to better assist doctors to make intra-operative decisions and guide treatment, and improve the safety of interventional operations.
It should be noted that the medical image fusion method provided by the invention can be applied to a computer device, which may be a server or a terminal. The computer device may include a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the medical image fusion method provided by the invention. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like. Those skilled in the art will appreciate that the medical image fusion method provided by the invention is not limited to such computer devices; a particular computer device may include more or fewer components, combine certain components, or have a different arrangement of components.
Example 1
To implement the foregoing ideas, this embodiment provides a medical image fusion method. Specifically, please refer to fig. 1, which schematically shows the overall flow of the medical image fusion method provided by one of the embodiments. As can be seen from fig. 1, the medical image fusion method provided in this embodiment includes:
S100: acquiring a localization image of a patient and a preoperative medical image of an examined region of the patient; acquiring a target focus area medical image of a first target organ or tissue according to the preoperative medical image; wherein the preoperative medical image is obtained by a first medical imaging device;
s200: mapping the medical image of the target focus area to the positioning image to obtain a first medical image to be fused;
S300: acquiring an intra-operative medical image of the examined part in real time, and acquiring a second medical image to be fused of a second target organ or tissue according to the intra-operative medical image; wherein the intra-operative medical image is obtained by a second medical imaging device;
S400: and fusing the first medical image to be fused and the second medical image to be fused in real time.
According to the medical image fusion method provided by this embodiment, the target focus area medical image of the first target organ or tissue can be acquired in advance from the preoperative medical image of the examined region, so it does not need to be acquired during the operation; only the intra-operative medical image of the examined region needs to be acquired in real time, which improves the efficiency of image fusion. Further, in this embodiment the first medical image to be fused and the second medical image to be fused are fused in real time according to the position information of the anatomical points of the localization image. This arrangement requires no additional positioning markers, so no positioning component for acquiring such markers needs to be added when acquiring the first medical image to be fused of the first target organ or tissue or the second medical image to be fused of the second target organ or tissue, which reduces the complexity of acquiring both images and makes the method easy to implement.
Further, the respective advantages of the first and second medical imaging devices can be fully utilized: the MR, CT, PET or MIP image of the focus area acquired before the operation is fused with the real-time intra-operative vascular DSA image, fully combining the morphological and structural information of the focus area provided by the MR, CT, PET or MIP image with the blood supply status of the focus area obtained from the vascular DSA image. This better assists physicians in making decisions during interventional procedures and guiding treatment, and lays a solid foundation for improving surgical safety and efficiency.
It should be noted that the present invention does not limit the timing of acquiring the positioning image: it may be obtained before the preoperative medical image of the examined part of the patient is acquired (i.e., before scanning the patient), and a scanning image or an optical photograph of the patient, of the same or a different modality, may be used. Likewise, the source of the preoperative medical image is not limited. In the invention it may be acquired by an image acquisition device (the first medical imaging device), for example a PET, CT or MR imaging device; it may be an MIP image obtained by performing a tomographic maximum intensity projection of the original image, acquired by the image acquisition device, along the view-angle projection direction; it may be retrieved from a hospital PACS (picture archiving and communication system); or, of course, it may be collected via the Internet. Furthermore, the size of the preoperative medical image is not limited and can be set according to the specific situation.
Maximum intensity projection (Maximum Intensity Projection, MIP) is a widely used CT and MR image post-processing technique. As projection rays pass through the original image of a section of tissue, the highest-density (brightest) pixel encountered along each ray is preserved and projected onto a two-dimensional plane, thereby forming the MIP reconstructed image.
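The projection described above can be sketched in a few lines of NumPy: along each projection ray (here, each column through the slice stack) only the maximum-intensity voxel survives. The toy volume values are illustrative, not taken from the patent.

```python
import numpy as np

def maximum_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Keep, along each projection ray, only the highest-intensity voxel."""
    return volume.max(axis=axis)

# Toy 4-slice volume of 3x3 pixels (values are illustrative)
vol = np.zeros((4, 3, 3))
vol[1, 1, 1] = 0.9   # a bright voxel on slice 1, e.g. contrast-filled tissue
vol[3, 0, 2] = 0.5   # a dimmer voxel on slice 3

mip = maximum_intensity_projection(vol, axis=0)  # 2D MIP image, shape (3, 3)
print(mip[1, 1], mip[0, 2])  # 0.9 0.5 -> brightest voxel along each ray survives
```

Projecting along a different `axis` yields MIP views from other directions, which matches the "view-angle projection direction" mentioned above.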
Furthermore, as will be appreciated by those skilled in the art, existing medical imaging devices (e.g., PET, CT, MR) typically first determine, from a positioning image, the location of the examined part (e.g., head, chest, abdomen, heart) of a subject (e.g., a patient or an animal) in the medical imaging system, and then move the patient bed to bring the examined part to the desired imaging position. Specifically, in the medical image fusion method provided by the invention, the positioning image is an image acquired for positioning before the patient is scanned. It should be noted that in the present invention the positioning image includes at least the part to be examined; in some embodiments it may include two or more parts, and preferably it may be a whole-body image of the patient.
As one preferred embodiment, the positioning image comprises a two-dimensional planar image of the patient; as described above, the first medical image to be fused comprises an MR, CT, PET or MIP image; the second medical image to be fused comprises a vascular DSA image; and the fusion result of the first medical image to be fused and the second medical image to be fused comprises a three-dimensional medical image. Accordingly, the first medical imaging device comprises any one of an MR device, a CT device and a PET device, and the second medical imaging device comprises a DSA device.
Further, the preoperative medical image is preferably a three-dimensional medical image, e.g. one composed of a series of two-dimensional medical images stacked along one axis, such as the head-foot direction. The preoperative medical image may be a magnetic resonance image sequence (a three-dimensional magnetic resonance image) acquired by a magnetic resonance device, a CT image sequence (a three-dimensional CT image) acquired by a CT device, or a three-dimensional medical image acquired by another medical imaging device; this embodiment is not limited in this respect.
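A minimal sketch of how such a three-dimensional volume can be assembled from an ordered two-dimensional slice series; the slice count and size here are hypothetical, and random data stands in for real scan slices.

```python
import numpy as np

# Hypothetical series: 20 axial slices of 64x64 pixels, already ordered
# along the head-foot direction (e.g. as read from a scan series).
slices = [np.random.rand(64, 64) for _ in range(20)]

# Stack the 2D slices into a single 3D volume; axis 0 is head-foot.
volume = np.stack(slices, axis=0)
print(volume.shape)  # (20, 64, 64)
```

Each slice keeps its original in-plane pixel grid; the stacking axis supplies the third dimension of the volume.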
For ease of understanding, the medical image fusion method provided by the present invention is described below taking the abdomen of a human body as the examined region, the liver as the first target organ, and the hepatic artery as the second target organ or tissue. Those skilled in the art should understand that the present invention does not limit the first target organ or tissue: in other embodiments it may be an organ or tissue other than the liver, where organs include, but are not limited to, the lungs, the chest, etc., and tissues include, but are not limited to, bone, muscle, and the like. The second target organ or tissue may likewise be other than the hepatic arterial vessels, for example the trachea.
As one exemplary embodiment, step S100 acquires a medical image of a target lesion area of a first target organ or tissue, comprising:
S110: dividing the acquired preoperative medical image into a first target organ or tissue region to obtain a first target organ or tissue preoperative image;
S120: and identifying the target focus area of the first target organ or tissue preoperative image to obtain a target focus area medical image of the first target organ or tissue.
According to the medical image fusion method provided by this embodiment, the first target organ or tissue region (such as the liver or chest) is first segmented from the acquired preoperative medical image to obtain the first target organ or tissue preoperative image, and the target lesion area (such as a liver lesion or breast lesion) is then identified in that image to obtain the target lesion area medical image of the first target organ or tissue. By extracting only the target lesion area medical image, the data volume of the first and second medical images to be fused during fusion is reduced and computing power is saved; interference from organs or tissues outside the lesion area is also reduced, so that the doctor can observe the fusion result of the first medical image to be fused and the second medical image to be fused more conveniently during the interventional operation, and is thereby better assisted in intra-operative decision-making and treatment guidance.
It should be noted that the present invention does not limit the method of segmenting the first target organ or tissue region. In some embodiments, the first target organ or tissue region may be segmented from the preoperative medical image by a pre-trained segmentation network model based on a deep learning algorithm to obtain the first target organ or tissue preoperative image; in other embodiments, the segmentation may be performed by a threshold method or a region-growing method; of course, in still other embodiments, the acquired preoperative medical image may be segmented manually (i.e., the doctor may delineate the first target organ or tissue region on the acquired preoperative medical image by interactive delineation). Similarly, in some embodiments, the target lesion area may be identified in the first target organ or tissue preoperative image by a pre-trained classification model based on a deep learning algorithm to obtain the target lesion area medical image of the first target organ or tissue; in other embodiments, the lesion area may be identified manually. It should further be noted, as will be understood by those skilled in the art, that if the first target organ or tissue preoperative image is obtained by an automatic segmentation method such as a deep learning algorithm, a threshold method or a region-growing method, the doctor may modify the automatic segmentation result by interactive delineation to obtain the final first target organ or tissue preoperative image. Likewise, if the target lesion area medical image is obtained by an automatic recognition method such as a deep learning algorithm, the doctor may modify the automatic recognition result by interactive delineation to obtain the final target lesion area medical image of the first target organ or tissue.
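As a minimal illustration of the threshold-based variant mentioned above, the organ region is first masked by an intensity window, and the lesion is then identified as the bright pixels inside that mask. The intensity ranges are invented for the toy image, not clinical values.

```python
import numpy as np

def segment_region(image: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Threshold segmentation: mask of pixels with intensity in [lo, hi]."""
    return (image >= lo) & (image <= hi)

# Toy 2D "preoperative image": background 0, organ ~0.5, lesion ~0.9
img = np.zeros((6, 6))
img[1:5, 1:5] = 0.5      # organ parenchyma
img[2, 2] = 0.9          # a bright lesion inside the organ

organ_mask = segment_region(img, 0.3, 1.0)   # first target organ region
lesion_mask = organ_mask & (img > 0.8)       # target lesion area within it
```

The same two-stage pattern (organ first, lesion within the organ) mirrors steps S110 and S120; an interactive correction step would simply edit these masks afterwards.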
Specifically, please refer to fig. 2, which schematically illustrates a medical image of the target lesion area of the first target organ obtained by applying the medical image fusion method provided in the first embodiment of the present invention. As shown in fig. 2, the first target organ a preoperative image (white area in fig. 2) may be acquired by segmenting the preoperative medical image (preferably a three-dimensional medical image), and the target lesion b area (black area in fig. 2) may then be identified in the first target organ a preoperative image to acquire the target lesion b area medical image of the first target organ a. Further, as will be appreciated by those skilled in the art and as noted above, the doctor may select a target lesion for the interventional operation (as shown in fig. 2, one of the lesions, for example the lesion with the larger outline, may be selected as the target lesion). It should also be noted that, as those skilled in the art will understand, the first target organ or tissue preoperative image and the target lesion area medical image of the first target organ or tissue share the same coordinate system.
As one exemplary embodiment, the mapping the medical image of the target lesion area to the localization image in step S200 to obtain a first medical image to be fused includes:
S210: determining the position information of the target focus area in the positioning image coordinate system according to the imaging parameters of the preoperative medical image and the pixel information of the target focus area medical image;
S220: and obtaining the first medical image to be fused according to the position information of the target focus area in the positioning image coordinate system.
So configured, according to the medical image fusion method provided in this embodiment, the position information (coordinate information) of the target lesion area in the positioning-image coordinate system is first determined from the imaging parameters of the preoperative medical image and the pixel information (the three-dimensional coordinates of each pixel) of the target lesion area medical image; the target lesion area is then reconstructed according to that position information. The positioning image thus provides a positional correspondence between each pixel of the preoperative medical image and each pixel of the intra-operative medical image, laying the foundation for fusing the first medical image to be fused (e.g., the target lesion area medical image) and the second medical image to be fused (e.g., the medical image of the target lesion's arterial vessels) correctly and in real time.
Specifically, as described above, since the preoperative medical image covers the examined part of the patient determined from the positioning image, the correspondence between the position of each pixel in the preoperative medical image and each pixel in the positioning image can be derived from the imaging parameters of the preoperative medical image and the examined part to which it corresponds. The position of the target lesion area in the positioning-image coordinate system is then determined from the position of the target lesion area's pixels within the preoperative medical image, after which the target lesion area medical image can be mapped onto the positioning image.
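The pixel-to-position correspondence can be sketched as the usual index-to-physical mapping derived from the imaging parameters (scan origin and voxel spacing). The origin and spacing values below are hypothetical, purely for illustration.

```python
import numpy as np

def voxel_to_physical(ijk, origin, spacing):
    # physical position = volume origin + per-axis voxel spacing * voxel index
    return np.asarray(origin, float) + np.asarray(spacing, float) * np.asarray(ijk, float)

# Assumed imaging parameters of the preoperative volume (hypothetical values)
origin = (0.0, -200.0, -150.0)   # mm, physical position of voxel (0, 0, 0)
spacing = (2.0, 1.0, 1.0)        # mm per voxel along each axis

# A lesion voxel's index maps to a physical coordinate; since the
# positioning image shares this physical frame, the in-plane components
# give the lesion's position on the positioning image.
p = voxel_to_physical((10, 50, 30), origin, spacing)
print(p)  # [  20. -150. -120.]
```

A real scanner additionally encodes slice orientation, but this translation-plus-scaling form already shows how pixel indices become positions in a shared coordinate system.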
Specifically, please refer to fig. 3, which schematically illustrates the mapping of the target lesion area medical image onto the positioning image by applying the medical image fusion method provided by the first embodiment of the present invention. As can be seen from fig. 3, by mapping the target lesion area medical image onto the positioning image c, the medical image fusion method provided by the present invention eliminates interference from organs or tissues outside the target lesion area, so that the doctor can view the position of the target lesion b in the positioning image more intuitively.
It should be noted that the present invention does not limit the specific method, in step S300, of acquiring the intra-operative medical image of the examined region in real time and of obtaining the second medical image to be fused of the second target organ or tissue from it. Preferably, the intra-operative medical image may be segmented to obtain the medical image of the second target organ or tissue, which then serves as the second medical image to be fused. The segmentation may be performed by techniques known to those skilled in the art; the present invention is not limited in this respect.
Preferably, in one exemplary embodiment, step S400 performs real-time fusion of the first medical image to be fused and the second medical image to be fused, including:
S410: determining anatomical points of the positioning image;
S420: acquiring the position information of the anatomical points of the positioning image in the positioning-image coordinate system and the pixel information of those anatomical points in the intra-operative medical image coordinate system, and determining the fusion position information parameter from these two sets of information;
S430: determining the position information of the second target organ or tissue in the positioning-image coordinate system according to the fusion position information parameter and the pixel information of the second medical image to be fused in the intra-operative medical image coordinate system;
S440: fusing the first medical image to be fused and the second medical image to be fused in real time according to the position information of the target lesion area and of the second target organ or tissue in the positioning-image coordinate system.
Therefore, according to the medical image fusion method provided by this embodiment, the first medical image to be fused and the second medical image to be fused are fused in real time according to the position information of the anatomical points (such as the liver top) of the positioning image, and no additional positioning markers need to be added. Consequently, no positioning component for acquiring such markers is required when acquiring either the first medical image to be fused of the first target organ or tissue or the second medical image to be fused of the second target organ or tissue, which reduces the acquisition complexity and makes the method easy to implement.
Specifically, referring to fig. 4, a schematic diagram is shown of matching the second medical image to be fused of the second target tissue d according to the anatomical points e, by applying the medical image fusion method provided by the first embodiment of the present invention. As can be seen from fig. 4, according to the medical image fusion method provided in this embodiment, from the position information of each anatomical point of the positioning image in the positioning-image coordinate system and the pixel information of the same anatomical point in the intra-operative medical image coordinate system, the spatial mapping relationship between the two coordinate systems (i.e., the fusion position information parameter) can be obtained; the position information of the second target organ or tissue in the positioning-image coordinate system can then be determined from the fusion position information parameter and the pixel information of the second medical image to be fused in the intra-operative medical image coordinate system. The position information of the target lesion area and of the second target organ or tissue in the positioning-image coordinate system can be converted into the intra-operative medical image coordinate system (or the positioning-image coordinate system can be used directly); the first medical image to be fused and the second medical image to be fused, once converted into the same coordinate system, are then fused to obtain the fusion result. In addition, points 1, 2, ... in fig. 4 are anatomical points, which in practical applications can be selected as needed.
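Under the assumption that the fusion position information parameter is a planar affine transform estimated from corresponding anatomical points, it can be obtained by least squares. This is a sketch, not the patent's exact procedure, and the point coordinates are invented.

```python
import numpy as np

def fit_affine_2d(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2D affine mapping src points -> dst points.
    src, dst: (N, 2) arrays of corresponding anatomical points."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates, (N, 3)
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) transform parameters
    return X

def apply_affine(X: np.ndarray, pts: np.ndarray) -> np.ndarray:
    A = np.hstack([pts, np.ones((len(pts), 1))])
    return A @ X

# Toy anatomical points: assume the intra-operative frame equals the
# positioning-image frame scaled by 2 and shifted by (5, -3).
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)  # positioning image
dst = src * 2 + np.array([5.0, -3.0])                        # intra-op image
X = fit_affine_2d(src, dst)
mapped = apply_affine(X, src)
```

With at least three non-collinear point pairs the affine parameters are determined; extra points make the least-squares fit robust to small localization errors of individual anatomical points.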
By comparing figs. 2, 3 and 4, it can be seen that although the preoperative and intra-operative medical images are both medical images of the same patient and the same examined region, figs. 2 and 3 display the medical image of the target lesion b area (such as a liver lesion) of the first target organ a (such as the liver) more clearly, while fig. 4 displays the medical image of the second target tissue d (such as the hepatic artery) more clearly.
It should be noted that, for the details of how to register the first medical image to be fused (the preoperative image of the target lesion area of the first target organ or tissue) with the second medical image to be fused (the intra-operative image of the second target organ or tissue), reference may be made to the prior art; they are not repeated here. Furthermore, as those skilled in the art can understand, the first medical image to be fused and the second medical image to be fused, once converted into the same coordinate system, may be combined by a logical OR operation to obtain the fusion result (e.g., the target lesion area together with the arterial vessels supplying it).
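The logical-OR fusion of two masks already expressed in the same coordinate system can be sketched as follows; the mask shapes and positions are invented for illustration.

```python
import numpy as np

# Binary masks already transformed into the same coordinate system:
lesion_mask = np.zeros((8, 8), bool)
lesion_mask[2:5, 2:5] = True        # target lesion area (from preoperative image)

vessel_mask = np.zeros((8, 8), bool)
vessel_mask[:, 3] = True            # feeding artery (from intra-operative DSA)

# Logical OR keeps every pixel belonging to either structure.
fused = lesion_mask | vessel_mask
print(fused.sum())  # 14 pixels: 9 lesion + 8 vessel - 3 overlapping
```

Because OR is symmetric, neither structure hides the other; pixels where the vessel crosses the lesion are simply counted once.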
With continued reference to fig. 1, in one exemplary embodiment, the medical image fusion method further includes:
S500: and displaying the fusion result of the first medical image to be fused and the second medical image to be fused in real time according to a designated display mode.
Preferably, the fusion result of the first medical image to be fused and the second medical image to be fused may be displayed in the intra-operative medical image coordinate system. Specifically, please refer to fig. 5, which schematically illustrates the fusion result of a first medical image to be fused and a second medical image to be fused obtained by applying the medical image fusion method provided by the first embodiment of the present invention. In a specific implementation, the target lesion area medical image is first converted into the intra-operative medical image coordinate system according to the anatomical points of the positioning image, and is then fused there with the second medical image to be fused of the second target organ or tissue, yielding a fused image in the intra-operative medical image coordinate system. As shown in fig. 5, the medical image of the first target organ (gray outline area in fig. 5), the medical image of the target lesion area (white area in fig. 5), and the second target tissue area (black vessel area in fig. 5) are each displayed.
It should be noted that, as can be appreciated by those skilled in the art, during the fusion the target lesion area medical image may be displayed superimposed on the second medical image to be fused of the second target organ or tissue; similarly, the second medical image to be fused of the second target organ or tissue may be displayed superimposed on the first medical image to be fused (the target lesion area medical image).
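Superimposed display can be sketched as simple alpha blending: the lesion layer is drawn over the base image only where the lesion mask is set. The image values and the alpha level below are illustrative.

```python
import numpy as np

def overlay(base: np.ndarray, top: np.ndarray, mask: np.ndarray,
            alpha: float = 0.6) -> np.ndarray:
    """Blend `top` over `base` wherever `mask` is set; alpha sets opacity."""
    out = base.astype(float).copy()
    out[mask] = (1.0 - alpha) * out[mask] + alpha * top[mask]
    return out

# Intra-operative DSA frame as the base, lesion rendered as a bright layer
base = np.full((4, 4), 0.2)       # dim background image
top = np.ones((4, 4))             # bright lesion rendering
mask = np.zeros((4, 4), bool)
mask[1:3, 1:3] = True             # lesion footprint

shown = overlay(base, top, mask, alpha=0.5)
print(shown[1, 1], shown[0, 0])   # 0.6 0.2 -> blended inside, untouched outside
```

Swapping `base` and `top` gives the reverse superimposition mentioned above, and `alpha` lets the doctor trade off visibility of the two layers.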
Example II
The present embodiment provides a medical image fusion system; please refer to fig. 6, which schematically illustrates a block diagram of the medical image fusion system provided in this embodiment. As can be seen from fig. 6, the medical image fusion system provided in this embodiment includes: a first medical image acquisition device 101, a medical image positioning unit 102, a second medical image acquisition device 103 and a medical image fusion unit 104. Specifically, the first medical image acquisition device 101 is configured to acquire a positioning image of a patient and a preoperative medical image of an examined region of the patient, and to acquire a target lesion area medical image of a first target organ or tissue from the preoperative medical image, the preoperative medical image being obtained by a first medical imaging device. The medical image positioning unit 102 is configured to map the target lesion area medical image onto the positioning image to obtain a first medical image to be fused. The second medical image acquisition device 103 is configured to acquire an intra-operative medical image of the examined region in real time and to acquire a second medical image to be fused of a second target organ or tissue from the intra-operative medical image, the intra-operative medical image being obtained by a second medical imaging device. The medical image fusion unit 104 is configured to fuse the first medical image to be fused and the second medical image to be fused in real time.
Preferably, the medical image fusion system provided in one exemplary embodiment further includes a display unit 105 configured to display the fusion result of the first medical image to be fused and the second medical image to be fused in real time in a specified display mode. So configured, the display unit 105 makes it more convenient for the doctor to observe the fusion result, better assisting the doctor in making decisions during the interventional operation and guiding treatment, and laying a solid foundation for improving operation safety and efficiency. Preferably, the display unit 105 may also be configured to display at least one of the positioning image of the patient, the preoperative medical image of the examined region of the patient, the target lesion area medical image of the first target organ or tissue, the first medical image to be fused, the intra-operative medical image of the examined region, and the second medical image to be fused of the second target organ or tissue, to facilitate the doctor's decisions.
Therefore, the medical image fusion system provided by this embodiment can fuse the medical image (such as a PET image or an MR image) of a lesion area (such as a liver lesion) acquired before the operation with the vascular DSA image acquired in real time during the operation, presenting the morphology and structure information of the lesion area and its blood supply to the doctor in real time in an end-to-end process, thereby better assisting the doctor in making decisions during interventional operations and guiding treatment, and laying a solid foundation for improving operation safety and efficiency.
Since the medical image fusion system provided in this embodiment shares its basic principle with the medical image fusion method provided in the first embodiment, it is not described again here to avoid redundancy; for more details, refer to the related description in the first embodiment.
With continued reference to fig. 6, one exemplary embodiment of the medical image fusion system further includes a human-computer interaction unit 106. As a preferred embodiment, the human-computer interaction unit 106 is configured to determine the target lesion area of the first target organ or tissue, the anatomical points (such as the liver top) of the positioning image, and/or the display mode of the fusion result of the first medical image to be fused and the second medical image to be fused. The medical image fusion system provided by this embodiment thus supports both manual and automatic determination of the anatomical points of the positioning image and/or of the first medical image to be fused, offering good flexibility. For example, through the human-computer interaction unit 106, the doctor may select the 3D fusion angle of the first and second medical images to be fused, and the display color and brightness of the first target organ or tissue, of the lesion area, and of the second target organ or tissue, so as to observe the fusion result more conveniently.
Further, the first medical image acquisition device 101, the medical image positioning unit 102, the second medical image acquisition device 103 and the medical image fusion unit 104 in the medical image fusion system provided in this embodiment may each be implemented in software or in hardware; preferably, they are implemented as a combination of software and hardware, for example comprising a processor, a memory and a computer program stored in the memory. When the computer program is executed, the first medical image acquisition device 101 is controlled to acquire the target lesion area medical image of the first target organ or tissue, the medical image positioning unit 102 is controlled to map the target lesion area medical image onto the positioning image to obtain the first medical image to be fused, the second medical image acquisition device 103 is controlled to acquire the second medical image to be fused of the second target organ or tissue in real time, and the medical image fusion unit 104 is controlled to fuse the first medical image to be fused and the second medical image to be fused in real time. In particular, the processor, the memory, the first medical image acquisition device 101, the medical image positioning unit 102, the second medical image acquisition device 103, the medical image fusion unit 104 and the display unit 105 may be connected to one another, via either a wireless or a wired network. The wired network may include, among other things, one or more combinations of metal cables, hybrid cables, one or more interfaces, and the like.
The wireless network may include a combination of one or more of Bluetooth, a Local Area Network (LAN), a Wide Area Network (WAN), Near Field Communication (NFC), and the like. Further, the human-computer interaction unit 106 may be an operation button on the operation interface that the first medical image acquisition device 101, the medical image positioning unit 102, the second medical image acquisition device 103 and the medical image fusion unit 104 present through the display unit 105, or may be another interaction device capable of interacting with the medical image fusion system, including, but not limited to, an operation button, a touch-sensing device, and/or a voice-control device disposed on the stand or the display unit 105.
The processor, the memory, the human-computer interaction unit 106 and the display unit 105 may be integrated in an electronic device such as a portable computer, a tablet, a mobile phone or a smart terminal device. The processor may be centralized, such as a data center, or distributed, such as a distributed system, and may be local or remote. Further, in some embodiments, the processor may include one or a combination of a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction Set Processor (ASIP), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a processor, a microprocessor, a controller, a microcontroller, and the like.
As will be appreciated by those skilled in the art, the medical image fusion system provided in this embodiment may further comprise a patient bed assembly (not shown in the figures) carrying the first medical image acquisition device 101, the medical image positioning unit 102, the second medical image acquisition device 103 and the medical image fusion unit 104. For more details, please refer to the related description of the prior art, and the description thereof is omitted herein.
Example III
The embodiment provides an electronic device, please refer to fig. 7, which schematically illustrates a block structure of the electronic device according to an embodiment of the present invention. As shown in fig. 7, the electronic device comprises a processor 201 and a memory 203, the memory 203 having stored thereon a computer program which, when executed by the processor 201, implements the medical image fusion method described above.
Because the electronic device provided in this embodiment and the medical image fusion method provided in the first embodiment belong to the same inventive concept, the electronic device provided in this embodiment has at least all the advantages of the medical image fusion method provided in the first embodiment, and further details are omitted herein for details referring to the related descriptions in the first embodiment.
As shown in fig. 7, the electronic device further comprises a communication interface 202 and a communication bus 204; the processor 201, the communication interface 202 and the memory 203 communicate with each other via the communication bus 204. The communication bus 204 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is drawn in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface 202 is used for communication between the electronic device and other devices.
The processor 201 in the present invention may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 201 is the control center of the electronic device and connects the various parts of the entire electronic device using various interfaces and lines.
The memory 203 may be used to store the computer program, and the processor 201 implements various functions of the electronic device by running or executing the computer program stored in the memory 203 and invoking data stored in the memory 203.
The memory 203 may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
Example IV
The present embodiment provides a readable storage medium having stored therein a computer program which, when executed by a processor, can implement the medical image fusion method described above.
Since the readable storage medium provided in this embodiment and the medical image fusion method provided in the first embodiment belong to the same inventive concept, the readable storage medium has at least all the advantages of that method; further details are omitted here, and reference may be made to the related description in the first embodiment.
The readable storage media of embodiments of the present invention may take the form of any combination of one or more computer-readable media. The readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In summary, compared with the prior art, the medical image fusion method, the system, the electronic device and the storage medium provided by the invention have the following advantages:
According to the medical image fusion method provided by the invention, the medical image of the target lesion area of the first target organ or tissue can be obtained in advance from the preoperative medical image of the examined region, so it does not need to be acquired during the operation; only the medical image of the examined region needs to be acquired in real time intraoperatively, which improves image fusion efficiency. Furthermore, the first medical image to be fused and the second medical image to be fused are fused in real time (for example, according to the position information of the anatomical points of the localization image) without additionally adding a positioning marker. Consequently, no positioning component for acquiring such a marker needs to be added when acquiring the first medical image to be fused of the first target organ or tissue or the second medical image to be fused of the second target organ or tissue, which reduces the complexity of image acquisition and makes the medical image fusion method easy to implement.
Further, the advantages of the first and second medical imaging devices can be fully exploited: the MR, CT, PET, or MIP image of the lesion area acquired before the operation is fused with the real-time intraoperative vascular DSA image, so that the morphology and structure information of the lesion area provided by the MR, CT, PET, or MIP image is combined with the blood supply conditions of the lesion area captured by the vascular DSA image. This better assists the doctor in making decisions and guiding treatment in interventional operations, and lays a solid foundation for improving operation safety and efficiency.
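As an illustrative sketch only (the patent does not specify a blending rule), the real-time fusion step above — combining a pre-mapped preoperative lesion overlay with an intraoperative DSA frame already registered to the localization image coordinate system — might look like a simple weighted blend; the function name and the alpha-blending model are assumptions:

```python
import numpy as np

def fuse_images(lesion_overlay, dsa_frame, alpha=0.5):
    """Alpha-blend a pre-mapped preoperative lesion overlay with an
    intraoperative DSA frame.

    Assumes both images are already registered to the localization-image
    coordinate system and share the same shape; the blending rule is an
    assumption, since the patent does not specify one.
    """
    lesion_overlay = np.asarray(lesion_overlay, dtype=np.float64)
    dsa_frame = np.asarray(dsa_frame, dtype=np.float64)
    if lesion_overlay.shape != dsa_frame.shape:
        raise ValueError("images must be registered to the same grid")
    # Weighted sum: alpha weights the preoperative lesion information,
    # (1 - alpha) weights the real-time vascular information.
    return alpha * lesion_overlay + (1.0 - alpha) * dsa_frame
```

Because the transform mapping the lesion overlay into the localization image is computed once preoperatively, only this per-frame blend needs to run in real time during the procedure.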
It should be noted that the apparatus and methods disclosed in the embodiments herein may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments herein. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, the functional modules in the embodiments herein may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The above description is only illustrative of preferred embodiments of the present invention and is not intended to limit its scope. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from its spirit or scope; the present invention is intended to include such modifications and variations insofar as they come within the scope of the invention or its equivalents.
Claims (10)
1. A medical image fusion method, comprising:
acquiring a localization image of a patient and a preoperative medical image of an examined region of the patient; acquiring a target lesion area medical image of a first target organ or tissue according to the preoperative medical image; wherein the preoperative medical image is obtained by a first medical imaging device;
mapping the target lesion area medical image to the localization image to obtain a first medical image to be fused;
acquiring an intraoperative medical image of the examined region in real time, and acquiring a second medical image to be fused of a second target organ or tissue according to the intraoperative medical image; wherein the intraoperative medical image is obtained by a second medical imaging device;
and fusing the first medical image to be fused and the second medical image to be fused in real time.
2. The medical image fusion method of claim 1, wherein the acquiring a target lesion area medical image of a first target organ or tissue comprises:
segmenting the first target organ or tissue region from the acquired preoperative medical image to obtain a preoperative image of the first target organ or tissue;
and identifying the target lesion area in the preoperative image of the first target organ or tissue to obtain the target lesion area medical image of the first target organ or tissue.
3. The medical image fusion method according to claim 1, wherein the mapping the target lesion area medical image to the localization image to obtain a first medical image to be fused comprises:
determining position information of the target lesion area in the localization image coordinate system according to imaging parameters of the preoperative medical image and pixel information of the target lesion area medical image;
and obtaining the first medical image to be fused according to the position information of the target lesion area in the localization image coordinate system.
4. The medical image fusion method according to claim 3, wherein the fusing the first medical image to be fused and the second medical image to be fused in real time comprises:
determining anatomical points of the localization image;
acquiring position information of the anatomical points of the localization image in the localization image coordinate system and pixel information of the anatomical points in the medical image coordinate system; determining fusion position information parameters according to the position information of the anatomical points in the localization image coordinate system and the pixel information of the anatomical points in the medical image coordinate system;
determining position information of the second target organ or tissue in the localization image coordinate system according to the fusion position information parameters and pixel information of the second medical image to be fused in the medical image coordinate system;
and fusing the first medical image to be fused and the second medical image to be fused in real time according to the position information of the target lesion area in the localization image coordinate system and the position information of the second target organ or tissue in the localization image coordinate system.
5. The medical image fusion method according to any one of claims 1 to 4, wherein:
The localization image comprises a two-dimensional planar image of the patient;
the fusion result of the first medical image to be fused and the second medical image to be fused comprises a three-dimensional medical image;
the first medical imaging device comprises any one of an MR device, a CT device and a PET device, and the second medical imaging device comprises a DSA device.
6. The medical image fusion method of claim 5, further comprising: displaying the fusion result of the first medical image to be fused and the second medical image to be fused in real time in a specified display mode.
7. A medical image fusion system, comprising:
a first medical image acquisition device configured to acquire a localization image of a patient and a preoperative medical image of an examined region of the patient, and to acquire a target lesion area medical image of a first target organ or tissue according to the preoperative medical image; wherein the preoperative medical image is obtained by a first medical imaging device;
a medical image localization unit configured to map the target lesion area medical image to the localization image to obtain a first medical image to be fused;
A second medical image acquisition device configured to acquire an intra-operative medical image of the examined region in real time, and acquire a second medical image to be fused of a second target organ or tissue based on the intra-operative medical image; wherein the intra-operative medical image is obtained by a second medical imaging device;
And the medical image fusion unit is configured to fuse the first medical image to be fused and the second medical image to be fused in real time.
8. The medical image fusion system of claim 7, further comprising a display unit configured to display fusion results of the first medical image to be fused and the second medical image to be fused in real time in a specified display manner.
9. An electronic device comprising a processor and a memory, the memory having stored thereon a computer program which, when executed by the processor, implements the medical image fusion method of any of claims 1 to 6.
10. A readable storage medium, characterized in that the readable storage medium has stored therein a computer program which, when executed by a processor, implements the medical image fusion method of any one of claims 1 to 6.
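As an illustrative sketch only (not part of the claims), the "fusion position information parameters" of claim 4 can be modeled as a 2D affine transform estimated by least squares from anatomical-point correspondences, and then applied to map pixel coordinates into the localization image coordinate system. The affine model, the least-squares fit, and the function names below are all assumptions for illustration:

```python
import numpy as np

def estimate_fusion_parameters(pixels_med, points_loc):
    """Fit a 2D affine transform mapping anatomical-point pixel
    coordinates in the medical image coordinate system to positions
    in the localization image coordinate system.

    The affine parameterization and least-squares fit are assumptions;
    the claims do not fix a particular model.
    """
    pixels_med = np.asarray(pixels_med, dtype=np.float64)  # (n, 2)
    points_loc = np.asarray(points_loc, dtype=np.float64)  # (n, 2)
    n = pixels_med.shape[0]
    A = np.hstack([pixels_med, np.ones((n, 1))])           # (n, 3)
    # Solve A @ M ~= points_loc for the 3x2 affine parameter matrix M.
    M, *_ = np.linalg.lstsq(A, points_loc, rcond=None)
    return M                                               # (3, 2)

def map_to_localization(M, pixels):
    """Map medical-image pixel coordinates into the localization
    image coordinate system using the fitted parameters."""
    pixels = np.asarray(pixels, dtype=np.float64)
    A = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    return A @ M
```

Under this model, the same fitted matrix M positions both the second target organ or tissue and any further intraoperative pixels in the localization frame, so the per-frame cost during real-time fusion is one matrix multiplication.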
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211400094.2A CN118014923A (en) | 2022-11-09 | 2022-11-09 | Medical image fusion method, system, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118014923A true CN118014923A (en) | 2024-05-10 |
Family
ID=90958543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211400094.2A Pending CN118014923A (en) | 2022-11-09 | 2022-11-09 | Medical image fusion method, system, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118014923A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||