CN117994620A - Fusion method, device and equipment of CT image and MR image and storage medium - Google Patents


Info

Publication number
CN117994620A
Authority
CN
China
Prior art keywords
focus
target
image
lesion
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410173104.6A
Other languages
Chinese (zh)
Inventor
路志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lianren Healthcare Big Data Technology Co Ltd
Original Assignee
Lianren Healthcare Big Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lianren Healthcare Big Data Technology Co Ltd filed Critical Lianren Healthcare Big Data Technology Co Ltd
Priority to CN202410173104.6A priority Critical patent/CN117994620A/en
Publication of CN117994620A publication Critical patent/CN117994620A/en
Pending legal-status Critical Current

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiment of the invention discloses a fusion method, device, equipment and storage medium for CT images and MR images, comprising the following steps: acquiring at least one CT image to be analyzed and at least one MR image of a target object, wherein the CT image to be analyzed is a target-scan CT image of a target lesion; acquiring target lesion attribute information for the target lesion in all CT images to be analyzed, and determining the associated lesion corresponding to the target lesion from the MR images based on the target lesion attribute information; and extracting lesion features of the target lesion and the associated lesion respectively, then fusing the extracted lesion features to obtain target three-dimensional lesion information. This technical scheme solves the problem in the prior art that information processing is too slow when a conventional CT image is fused with an MR image: by fusing the features of a single target lesion in the target-scan CT image with those of the corresponding lesion in the MR image, it improves the image fusion processing speed and thus the efficiency of lesion diagnosis.

Description

Fusion method, device and equipment of CT image and MR image and storage medium
Technical Field
The embodiment of the invention relates to the technical field of medical image data processing, and in particular to a fusion method, device, equipment and storage medium for a CT image and an MR image.
Background
In medical imaging, CT and MR are two different imaging techniques that provide, respectively, high-resolution anatomical images and functional information of biological tissue. Accurate registration of these two kinds of images is of great importance for an in-depth understanding of a patient's pathophysiological condition.
In clinical diagnosis, fusion contrast analysis of CT and MR images of a body part is typically performed for a patient. However, conventional CT has a large scan field and is generally used for wide-range scanning, so the CT image contains far more feature information than the lesion of interest requires; as a result, when the CT image is fused with the MR image, the information processing speed tends to be too slow.
Disclosure of Invention
The embodiment of the invention provides a fusion method, device, equipment and storage medium for a CT image and an MR image, which can fuse the lesion features of a single target lesion in a target-scan CT image with those of the corresponding lesion in the MR image, improving the image fusion processing speed and thereby the efficiency of lesion diagnosis.
In a first aspect, an embodiment of the present invention provides a method for fusing a CT image and an MR image, including:
Acquiring at least one CT image to be analyzed and at least one MR image comprising a target site; the CT image to be analyzed is a target scanning CT image comprising a target focus;
Determining an associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed; wherein the target lesion attribute information includes: at least one of lesion size information and lesion density information;
and performing fusion processing on the extracted lesion features of the target lesion and the associated lesion to obtain target three-dimensional lesion information of the target lesion.
In a second aspect, an embodiment of the present invention provides a fusion apparatus of a CT image and an MR image, the apparatus including:
An image acquisition module for acquiring at least one CT image to be analyzed and at least one MR image including a target site; the CT image to be analyzed is a target scanning CT image comprising a target focus;
An associated lesion determination module configured to determine an associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed; wherein the target lesion attribute information includes: at least one of lesion size information and lesion density information;
and a lesion feature fusion module, configured to perform fusion processing on the extracted lesion features of the target lesion and the associated lesion to obtain target three-dimensional lesion information of the target lesion.
In a third aspect, an embodiment of the present invention provides a computer apparatus, including:
One or more processors;
A memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of fusion of CT images and MR images of any of the embodiments.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for fusing CT images and MR images according to any of the embodiments.
According to the technical scheme provided by the embodiment of the invention, at least one CT image to be analyzed and at least one MR image including a target site are acquired, wherein the CT image to be analyzed is a target-scan CT image including a target lesion; an associated lesion of the target lesion is determined from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed, wherein the target lesion attribute information includes at least one of lesion size information and lesion density information; and fusion processing is performed on the extracted lesion features of the target lesion and the associated lesion to obtain target three-dimensional lesion information of the target lesion. This scheme solves the problem in the prior art that information processing is too slow when a conventional CT image is fused with an MR image: by fusing the features of a single target lesion in the target-scan CT image with those of the corresponding lesion in the MR image, it improves the image fusion processing speed and thus the efficiency of lesion diagnosis.
Drawings
FIG. 1 is a flow chart of a method for fusing CT images and MR images according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for fusing CT images and MR images provided by an embodiment of the present invention;
FIG. 3 is a workflow diagram for fusion of CT images and MR images provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a device for fusing CT images and MR images according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a flowchart of a method for fusing a CT image and an MR image according to an embodiment of the present invention, where the embodiment of the present invention is applicable to a scene in which focal features in a target scan CT image and an MR image are fused, the method may be performed by a fusion device for a CT image and an MR image, and the device may be implemented by software and/or hardware.
As shown in fig. 1, the fusion method of the CT image and the MR image includes the steps of:
s110, at least one CT image to be analyzed and at least one MR image including the target site are acquired.
Wherein the target site may be a body site for performing lesion analysis. By way of example, the target site may be a body site such as the chest, pelvis, etc. The CT image to be analyzed may be a CT image required for lesion analysis. Specifically, the CT image to be analyzed includes a target scan CT image of the target lesion. The CT image to be analyzed may be, for example, a set of target scan CT images taken for a target lesion.
Target-scan CT uses a 1024×1024 matrix and generally scans only the region related to the lesion, which can raise the effective pixels per unit area to about 4 times that of conventional CT, thereby displaying more lesion information. Target scanning uses small-field, high-matrix and small-pitch techniques to significantly improve the spatial resolution of the image, which helps display more imaging characteristics of pulmonary nodules.
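The "about 4 times" figure follows from the matrix sizes alone, assuming a conventional CT matrix of 512×512 over the same field of view (the 512×512 value is an assumption; the text states only the 1024×1024 target-scan matrix):

```python
# Effective pixels per unit area scale with the square of the matrix size
# when the field of view is fixed. The 512x512 conventional matrix is assumed.
target_matrix = 1024
conventional_matrix = 512  # assumed conventional CT matrix size

gain = (target_matrix ** 2) / (conventional_matrix ** 2)
print(gain)  # 4.0 -> "about 4 times" the effective pixels per unit area
```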
Target scanning differs from conventional CT in terms of scanning mode, purpose, function, etc. The specific analysis is as follows:
1. The purposes are different: target scanning mainly provides a basis for qualitative diagnosis of lesions; because it offers higher spatial resolution, it is suitable for observing and analyzing organs or lesions with fine tissue structures. Conventional CT is a routine examination, mainly used to scan different parts or organs at standard layer thickness and layer spacing so as to obtain more comprehensive anatomical structure information.
2. The scanning modes are different: the scan field of the target scan is small and is typically used to perform a small range of scans. The scanning mode can provide higher spatial resolution and can clearly display organs or lesions with small tissue structures. Whereas a scan field of a general CT is large, it is generally used for performing a wide-range scan to obtain more comprehensive image information.
3. The effects are different: target scanning can generally detect lesions within a thinner layer-thickness range and magnify them, so the lesion can be observed more comprehensively, intuitively and from multiple directions, helping the doctor draw more accurate conclusions. The scan slices of conventional CT are thicker and do not provide such comprehensive observation. Thus, target scanning may be more advantageous in the qualitative diagnosis of disease.
In principle, target-scan CT uses X-rays while MR uses magnetic fields and radio waves; target-scan CT is mainly used for imaging bone and lung, while MR is suitable for soft-tissue structures throughout the body; the scanning speed of target-scan CT is relatively high while that of MR is relatively low; and CT has high contrast for bone structures while MR has high contrast for soft-tissue structures. In clinical diagnosis, registering the target-scan CT with the MR enables better contrast analysis, improves physicians' working efficiency, and improves follow-up quality.
S120, determining the associated focus of the target focus from the at least one MR image based on the target focus attribute information about the target focus in the at least one CT image to be analyzed.
Wherein, the target lesion attribute information may be characteristic information about the target lesion. Illustratively, the target lesion attribute information includes lesion size information and lesion density information. The lesion size information may be information about a size of a lesion. For example, lesion size information includes information on the length, width, area, shape, etc. of the lesion. The lesion density information may be information about a lesion density. Specifically, focus attribute information about the target focus in all CT images to be analyzed can be extracted, so that the target focus attribute information is obtained.
Further, the associated lesion may be a lesion in the MR image corresponding to the target lesion. It is understood that the target lesion and the associated lesion refer to the same lesion, and the associated lesion is the name for the target lesion in the MR image. Specifically, all focuses in the MR image can be identified, focus attribute information of each focus in the MR image is extracted to obtain focus attribute information to be analyzed, similarity between focus attribute information to be analyzed and target focus attribute information of each focus in the MR image is calculated, and finally the focus most similar to the target focus attribute information in the MR image is used as an associated focus.
S130, fusing and processing the extracted focus features of the target focus and the associated focus to obtain target three-dimensional focus information of the target focus.
Wherein, the target three-dimensional lesion information may be three-dimensional feature information about the target lesion. Specifically, the extracted focus features of the target focus and the related focus can be fused and processed to obtain three-dimensional focus feature data, and then the three-dimensional focus feature data is used as target three-dimensional focus information. Wherein the three-dimensional lesion feature data comprises: three-dimensional shape of focus, volume of focus, density of focus, smoothness of focus surface, etc.
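The three-dimensional quantities listed above can be illustrated with a minimal sketch that computes lesion volume and mean density from a binary voxel mask and an intensity volume; the array layout, spacing convention and function name are illustrative, not taken from the patent:

```python
import numpy as np

def lesion_3d_info(mask, volume, spacing):
    """Compute simple 3D lesion features from a binary voxel mask.

    mask:    boolean array (slices, rows, cols) marking lesion voxels
    volume:  intensity array of the same shape (e.g. HU values for CT)
    spacing: (dz, dy, dx) voxel size in mm
    """
    voxel_volume = spacing[0] * spacing[1] * spacing[2]  # mm^3 per voxel
    lesion_volume = mask.sum() * voxel_volume            # total lesion volume in mm^3
    lesion_density = float(volume[mask].mean())          # mean intensity inside lesion
    return {"volume_mm3": float(lesion_volume), "mean_density": lesion_density}

# Tiny synthetic example: a 2x2x2 lesion block with uniform intensity 100
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
vol = np.full((4, 4, 4), 100.0)
info = lesion_3d_info(mask, vol, spacing=(1.0, 1.0, 1.0))
print(info)  # {'volume_mm3': 8.0, 'mean_density': 100.0}
```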
According to the technical scheme provided by the embodiment of the invention, at least one CT image to be analyzed and at least one MR image including a target site are acquired, wherein the CT image to be analyzed is a target-scan CT image including a target lesion; an associated lesion of the target lesion is determined from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed, wherein the target lesion attribute information includes at least one of lesion size information and lesion density information; and fusion processing is performed on the extracted lesion features of the target lesion and the associated lesion to obtain target three-dimensional lesion information of the target lesion. This scheme solves the problem in the prior art that information processing is too slow when a conventional CT image is fused with an MR image: by fusing the features of a single target lesion in the target-scan CT image with those of the corresponding lesion in the MR image, it improves the image fusion processing speed and thus the efficiency of lesion diagnosis.
Fig. 2 is a flowchart of another method for fusing a CT image and an MR image according to an embodiment of the present invention, applicable to scenes in which lesion features in a target-scan CT image and an MR image are fused. On the basis of the above embodiment, this embodiment further illustrates how to determine the associated lesion of the target lesion from at least one MR image based on target lesion attribute information about the target lesion in at least one CT image to be analyzed, and how to fuse the extracted lesion features of the target lesion and the associated lesion to obtain the target three-dimensional lesion information of the target lesion. The apparatus may be implemented in software and/or hardware, and integrated into a computer device having application development functionality.
As shown in fig. 2, the fusion method of the CT image and the MR image includes the steps of:
S210, at least one CT image to be analyzed and at least one MR image including a target site are acquired.
Wherein the target site may be a body site for performing lesion analysis. By way of example, the target site may be a body site such as the chest, pelvis, etc. The CT image to be analyzed may be a CT image required for lesion analysis. Specifically, the CT image to be analyzed includes a target scan CT image of the target lesion. The CT image to be analyzed may be, for example, a set of target scan CT images taken for a target lesion.
Optionally, acquiring at least one CT image to be analyzed including the target site includes: acquiring at least one CT image set to be analyzed comprising a target part, and generating a CT image control corresponding to the CT image set to be analyzed in a preset interactive interface; the CT image set to be analyzed is a target scanning CT image set related to a specified focus area; and determining a target image set from the CT image set to be analyzed in response to the selection operation of the CT image control, and taking the CT images in the target image set as CT images to be analyzed.
The set of CT images to be analyzed may be a set of target scan CT images taken of a lesion in a target site. Specifically, a target scan CT image may be taken for each focal region of the target portion, so as to obtain a CT image set to be analyzed corresponding to the focal region. I.e. each focus area corresponds to a CT image set to be analyzed.
Further, the CT image control may be an interactive control corresponding to a CT image set to be analyzed. The user can select the CT image set to be analyzed by interacting with its CT image control. Further, the target image set may be the selected one of the CT image sets to be analyzed. Specifically, a selection operation on a CT image control may be obtained, the CT image set to be analyzed corresponding to the selected control is taken as the target image set, and finally the CT images in the target image set are taken as the CT images to be analyzed.
S220, acquiring focus attribute information of the target focus to obtain target focus attribute information corresponding to the target focus.
The lesion attribute information may be characteristic information about a lesion. Illustratively, the lesion attribute information includes lesion size information and lesion density information. The lesion size information may be information about the size of a lesion, for example its length, width, area and shape. The lesion density information may be information about the lesion density. In particular, the target lesion attribute information includes lesion size information and lesion density information of the target lesion. For example, lesion attribute information about the target lesion may be extracted from all CT images to be analyzed, thereby obtaining the target lesion attribute information.
S230, identifying all focuses in the MR image, determining at least one focus to be analyzed, and respectively obtaining focus attribute information of each focus to be analyzed to obtain focus attribute information to be analyzed corresponding to the focus to be analyzed.
The lesion to be analyzed may be a lesion present in the MR image. Specifically, all lesions in the MR image may be identified, and each lesion in the MR image may be regarded as a lesion to be analyzed. The specific image processing method used to identify lesions in the MR image is not limited herein.
Further, the lesion attribute information to be analyzed may be lesion attribute information about the lesion to be analyzed. Correspondingly, the attribute information of the focus to be analyzed can also comprise focus size information and focus density information of the focus to be analyzed. Specifically, after determining the focus to be analyzed in the MR image, focus size information and focus density information of the focus to be analyzed can be extracted for each focus to be analyzed, so as to obtain focus attribute information to be analyzed corresponding to the focus to be analyzed.
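As a hedged sketch of the attribute extraction described above, the following derives size information (bounding-box length/width and area) and density information (mean intensity) for one segmented lesion; the function name, the bounding-box definition of length/width, and the use of mean intensity as density are illustrative assumptions:

```python
import numpy as np

def lesion_attributes(mask, image, pixel_spacing=(1.0, 1.0)):
    """Extract size and density attributes for one segmented lesion.

    mask:  boolean 2D array marking the lesion in one slice
    image: intensity 2D array of the same shape
    """
    rows, cols = np.nonzero(mask)
    length = (rows.max() - rows.min() + 1) * pixel_spacing[0]  # bounding-box height
    width = (cols.max() - cols.min() + 1) * pixel_spacing[1]   # bounding-box width
    area = mask.sum() * pixel_spacing[0] * pixel_spacing[1]    # lesion area
    density = float(image[mask].mean())                        # mean intensity
    return {"length": length, "width": width, "area": area, "density": density}

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True                      # a 3x4 rectangular "lesion"
img = np.full((8, 8), 50.0)
attrs = lesion_attributes(mask, img)
print(attrs)  # {'length': 3.0, 'width': 4.0, 'area': 12.0, 'density': 50.0}
```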
S240, determining the information similarity of the attribute information of the focus to be analyzed and the attribute information of the target focus, and determining the associated focus from the focus to be analyzed according to the information similarity.
The information similarity may be a parameter indicating a degree of similarity between the lesion to be analyzed and the target lesion. Optionally, the attribute information of the focus to be analyzed and the attribute information of the target focus can be respectively converted into corresponding vector data to obtain the attribute vector of the focus to be analyzed and the attribute vector of the target focus, further, the euclidean distance between the attribute vector of the focus to be analyzed and the attribute vector of the target focus is calculated, and then the calculated euclidean distance is used as the information similarity.
Further, the associated lesion may be the lesion in the MR image corresponding to the target lesion. It can be understood that the target lesion and the associated lesion refer to the same lesion; the associated lesion is simply the name for the target lesion within the MR image. Specifically, after determining the information similarity between each lesion to be analyzed and the target lesion, the lesion to be analyzed with the greatest information similarity may be taken as the associated lesion. By determining the information similarity between the attribute information of each lesion to be analyzed and the target lesion attribute information, the lesion in the target-scan CT image can be registered with the corresponding lesion in the MR image, which facilitates subsequent feature analysis of the registered lesion and improves the speed of lesion analysis.
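The matching rule in S240 can be sketched in a few lines: convert each lesion's attribute information into a vector, compute Euclidean distances, and take the candidate with the greatest similarity. The `1/(1+d)` similarity transform and the attribute values are illustrative assumptions, since the text only specifies a Euclidean distance and a maximum-similarity selection:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two attribute vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_associated_lesion(target_attrs, candidate_attrs):
    """Return the index of the MR lesion most similar to the target lesion.

    target_attrs:    attribute vector of the CT target lesion, e.g. (length, width, density)
    candidate_attrs: list of attribute vectors, one per lesion found in the MR image
    """
    similarities = [1.0 / (1.0 + euclidean(target_attrs, c)) for c in candidate_attrs]
    return max(range(len(similarities)), key=lambda i: similarities[i])

target = (12.0, 8.0, 35.0)  # illustrative size/density values for the target lesion
candidates = [(30.0, 25.0, 80.0), (12.5, 7.5, 34.0), (5.0, 4.0, 60.0)]
print(find_associated_lesion(target, candidates))  # 1 -> the closest candidate
```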
Traditional CT-MR registration is mainly performed between whole CT and MR series acquired at different times; the whole-series images are large, so processing is slow and the effect is poor. The embodiment of the invention instead addresses registration between the whole MR series and a local lesion CT: because the local lesion images are small overall, processing is fast and lesion information registration is more accurate, which is relatively innovative in technical realization. Meanwhile, in clinical application, when differential diagnosis of a lesion is performed, the traditional method relies on a doctor manually paging through the images to match them, which takes a long time, and the localization of the lesion on the same layer is not very accurate.
S250, extracting focus features of the target focus in the CT image to be analyzed to obtain first focus features.
Wherein the first lesion characteristic may be a lesion characteristic relating to the target lesion. Illustratively, the first lesion characterization includes size information, density information, etc. of the target lesion. Specifically, focus features about the target focus in each CT image to be analyzed may be extracted separately, so as to obtain a first focus feature.
And S260, extracting focus features related to the associated focus in the MR image to obtain second focus features.
The second lesion feature may be a lesion feature relating to the associated lesion. Illustratively, the second lesion feature includes size information, density information and smoothness information of the associated lesion. Specifically, the lesion features related to the associated lesion in each MR image may be extracted separately, thereby obtaining the second lesion feature.
S270, determining target three-dimensional focus features according to the first focus features and the second focus features, constructing a target three-dimensional focus model according to the target three-dimensional focus features, and taking the target three-dimensional focus features and the target three-dimensional focus model as the target three-dimensional focus information.
Wherein the target three-dimensional lesion feature may be three-dimensional lesion feature data about the target lesion. Exemplary, three-dimensional lesion characterization data includes: three-dimensional shape of focus, volume of focus, density of focus, smoothness of focus surface, etc. The target three-dimensional lesion information may be three-dimensional feature information about the target lesion. Specifically, a target three-dimensional focus model can be constructed according to the target three-dimensional focus features, and the target three-dimensional focus features and the target three-dimensional focus model are used as target three-dimensional focus information. Wherein the target three-dimensional lesion model may be a three-dimensional stereoscopic model with respect to the target lesion. The first focus feature and the second focus feature are fused to obtain target three-dimensional focus information, so that the target focus can be conveniently analyzed from a three-dimensional perspective based on the target three-dimensional focus information, and focus analysis accuracy is improved.
Optionally, determining the target three-dimensional focus feature according to the first focus feature and the second focus feature includes inputting the first focus feature and the second focus feature into a pre-trained target focus feature fusion model to obtain the target three-dimensional focus feature; wherein the target three-dimensional lesion feature comprises: at least one of lesion three-dimensional shape, lesion volume, lesion density, lesion surface smoothness.
The target lesion feature fusion model may be a model for fusing lesion features in a CT image and an MR image. The target three-dimensional focus feature can be obtained by inputting the first focus feature and the second focus feature into a pre-trained target focus feature fusion model.
Specifically, the target lesion feature fusion model may be obtained through pre-training. For example, a convolutional neural network (CNN) may be used as the initial model for CT and MR image fusion, and the initial model is then trained to obtain the target lesion feature fusion model. The specific training steps are as follows:
1. Data preparation: collect and prepare labeled CT and MR image pairs, ensuring that the images are aligned and correspond to each other; this may involve preprocessing steps such as image registration.
2. Network architecture design: design a deep neural network structure for fusion; typically this is an end-to-end structure that accepts CT and MR images as inputs and outputs the fused image.
3. Input data processing: preprocess the CT and MR images (normalization, denoising or other enhancement operations) to ensure stability and convergence of network training.
4. Network training: train the designed network structure using the prepared image pairs; the goal of training is to minimize the difference between the fused image and the ground truth, typically using mean squared error (MSE) or another suitable loss function.
5. Validation and adjustment: use a validation set to evaluate network performance, and adjust hyperparameters or the network structure when needed to improve performance.
6. Testing and application: test the trained network on unseen data and apply it in the actual scenario.
In addition, the output of the deep learning model can be interpreted to understand the key features the network relies on in its decision process, and the designed model should meet the relevant regulations and ethical guidelines for medical image processing.
In the above steps, some details may vary depending on the particular problem and data set. Furthermore, the use of pre-trained models, transfer learning, etc. techniques may be considered to improve model performance, especially when available training data is limited. It is noted that the application of deep learning methods in medical image processing requires caution, as problems with the interpretability, robustness and safety of the model may have a significant impact on clinical applications.
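The training objective in step 4 can be illustrated with a deliberately simplified stand-in for the CNN: a two-parameter linear fusion model fitted by gradient descent on the MSE loss. This sketches the loss and update rule only, not the patent's network architecture; the weights, learning rate and synthetic data are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "registered pair": a CT slice, an MR slice, and a synthetic ground-truth fusion
ct = rng.random((16, 16))
mr = rng.random((16, 16))
ground_truth = 0.6 * ct + 0.4 * mr        # synthetic target for the demo

# Simplified fusion model: fused = w_ct * CT + w_mr * MR
w = np.array([0.5, 0.5])
lr = 0.5
for _ in range(200):
    fused = w[0] * ct + w[1] * mr
    err = fused - ground_truth                                 # MSE loss is mean(err**2)
    grad = 2 * np.array([(err * ct).mean(), (err * mr).mean()])  # dLoss/dw
    w -= lr * grad                                             # gradient-descent update

print(np.round(w, 3))  # the weights approach [0.6, 0.4]
```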
Optionally, after the target three-dimensional focus information is obtained, the target three-dimensional focus information can be input into a pre-trained target focus error analysis model to obtain a focus error analysis result; and adjusting model parameters in the target focus characteristic fusion model according to focus error analysis results.
The target lesion error analysis model may be a model for analyzing the error of the target three-dimensional lesion information. Optionally, the target lesion error analysis model may be connected to the target lesion feature fusion model; it performs error analysis on the target three-dimensional lesion information output by the fusion model and determines the error relationship between the target three-dimensional lesion information and the first and second lesion features, thereby obtaining the lesion error analysis result. The model parameters of the target lesion feature fusion model can then be adjusted according to the lesion error analysis result. By adjusting these model parameters based on the lesion fusion errors, an adaptive optimization strategy can be adopted in which the parameters are continuously optimized through an iterative process, thereby improving model robustness.
Optionally, after the target three-dimensional focus information is obtained, a corresponding relation between the target three-dimensional focus information and the target focus in the CT image to be analyzed can be constructed, so as to obtain first corresponding relation information; constructing a corresponding relation between the target three-dimensional focus information and the associated focus in the MR image to obtain second corresponding relation information; and displaying the target three-dimensional focus information, the first corresponding relation information and the second corresponding relation information in a preset display interactive interface.
The first correspondence information may be correspondence information between the target three-dimensional focus information and the target focus in the CT image to be analyzed. Specifically, since the CT image to be analyzed contains only one focus, namely the target focus, a correspondence can be established directly between the target three-dimensional focus information and the focus area in the CT image to be analyzed, yielding the first correspondence information. For example, the correspondence may associate a three-dimensional coordinate point in the target three-dimensional focus model with a pixel point in the CT image to be analyzed, such as the correspondence between a given three-dimensional coordinate point of the target three-dimensional focus model and a given pixel in a given CT image to be analyzed.
Correspondingly, the second correspondence information may be correspondence information between the target three-dimensional focus information and the associated focus in the MR image. Specifically, the MR image may contain more than one focus, but the target three-dimensional focus information is a three-dimensional feature of the associated focus only; a correspondence is therefore established only between the target three-dimensional focus information and the associated focus in the MR image, yielding the second correspondence information, and no association exists between the target three-dimensional focus information and foci in the MR image other than the associated focus. For example, the correspondence may associate a three-dimensional coordinate point in the target three-dimensional focus model with a pixel point in the MR image, such as the correspondence between a given three-dimensional coordinate point of the target three-dimensional focus model and a given pixel in a given MR image.
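The point-to-pixel correspondences described above can be sketched as follows. This Python fragment is an illustrative assumption, not the patented implementation: `affine` is a hypothetical 2×4 transform from three-dimensional model coordinates to (row, column) pixel coordinates, and correspondences are recorded only for points landing inside the associated focus's mask, mirroring the restriction that other foci receive no association.

```python
import numpy as np

def build_correspondence(points_3d, affine, lesion_mask):
    """Map 3-D focus-model points into image pixel coordinates via a 2x4
    affine transform, keeping only points that land inside the associated
    focus's mask (no correspondence is recorded for other foci)."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    pix = np.round((affine @ homog.T).T).astype(int)   # (row, col) per point
    corr = {}
    h, w = lesion_mask.shape
    for p3, (r, c) in zip(points_3d, pix):
        if 0 <= r < h and 0 <= c < w and lesion_mask[r, c]:
            corr[tuple(p3)] = (r, c)
    return corr
```

The same construction applies to the first correspondence information, with the CT image's focus area as the mask.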
Illustratively, fig. 3 is a workflow diagram for fusion of CT images and MR images provided by an embodiment of the present invention. Specifically, as shown in fig. 3, the workflow for fusing the CT image and the MR image includes the following steps:
1. Preprocessing: preprocess the target scan CT image and the MR image, including denoising, gray-scale normalization and image standardization, to ensure that the input images have comparable feature ranges.
2. Feature extraction: extract focus features from the CT and MR images and segment the foci to construct three-dimensional focus information.
3. Multi-level registration: using a multi-scale strategy, divide the image pyramid into several levels and register level by level from coarse to fine; at each level, a registration algorithm performs a preliminary registration, and its result is used to initialize the next level.
4. Multimodal fusion: fuse the three-dimensional focus information extracted from the CT and MR images in three-dimensional space to establish correspondences between the multimodal images.
5. Optimization: adopt an adaptive optimization strategy that continuously refines the registration parameters through an iterative process, so that the algorithm works robustly under different conditions.
6. Evaluation and verification: evaluate the registration result quantitatively and qualitatively, using the root mean square error as the verification metric.
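The steps above can be sketched as a minimal pipeline. The fragment below is an illustrative Python/NumPy sketch, not the patented implementation: preprocessing is reduced to a 3×3 mean-filter denoise plus min-max normalization, multi-level registration to a coarse-to-fine search for an integer 2-D translation, and verification to the root-mean-square-error metric named in step 6.

```python
import numpy as np

def preprocess(img):
    """Step 1: light denoising (3x3 mean filter via circular shifts)
    followed by min-max intensity normalization to [0, 1]."""
    img = img.astype(float)
    acc = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            acc += np.roll(np.roll(img, dr, axis=0), dc, axis=1)
    img = acc / 9.0
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def register_translation(fixed, moving, levels=3):
    """Step 3: coarse-to-fine pyramid search for an integer 2-D shift.
    Each level searches +/-1 around the shift found at the coarser level."""
    shift = np.zeros(2, dtype=int)
    for level in reversed(range(levels)):          # coarsest level first
        scale = 2 ** level
        f = fixed[::scale, ::scale]
        m = moving[::scale, ::scale]
        init = shift // scale
        best, best_err = init, np.inf
        for dr in range(init[0] - 1, init[0] + 2):
            for dc in range(init[1] - 1, init[1] + 2):
                shifted = np.roll(np.roll(m, dr, axis=0), dc, axis=1)
                err = np.mean((f - shifted) ** 2)
                if err < best_err:
                    best, best_err = np.array([dr, dc]), err
        shift = best * scale                       # initialize next level
    return shift

def rmse(fixed, moving):
    """Step 6: root mean square error used to verify the registration."""
    return float(np.sqrt(np.mean((fixed - moving) ** 2)))
```

A production pipeline would of course use a full deformable or rigid registration algorithm at each pyramid level; the sketch only shows how the coarse result initializes the finer one.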
According to the technical scheme provided by the embodiment of the invention, at least one CT image to be analyzed and at least one MR image, each including the target site, are acquired; all foci in the MR image are identified and at least one focus to be analyzed is determined; focus attribute information is acquired for each focus to be analyzed to obtain the focus attribute information to be analyzed; the information similarity between the focus attribute information to be analyzed and the target focus attribute information is determined, and the associated focus is determined from the foci to be analyzed according to the information similarity; focus features of the target focus are extracted from the CT image to be analyzed to obtain first focus features; focus features of the associated focus are extracted from the MR image to obtain second focus features; a target three-dimensional focus feature is determined from the first and second focus features, a target three-dimensional focus model is constructed from the target three-dimensional focus feature, and the target three-dimensional focus feature and the target three-dimensional focus model are taken together as the target three-dimensional focus information. This technical scheme solves the problem in the prior art of excessively slow information processing when fusing ordinary CT images and MR images: the focus features of a single target focus in the target scan CT image can be fused with those of the corresponding focus in the MR image, which increases the image fusion processing speed and thus improves focus diagnosis efficiency.
Fig. 4 is a schematic structural diagram of a device for fusing a CT image and an MR image according to an embodiment of the present invention, where the embodiment of the present invention is applicable to a scenario in which focal features in a target scan CT image and an MR image are fused, and the device may be implemented by software and/or hardware, and integrated into a computer device with an application development function.
As shown in fig. 4, the fusion apparatus of a CT image and an MR image includes: an image acquisition module 310, an associated lesion determination module 320, and a lesion feature fusion module 330.
Wherein the image acquisition module 310 is configured to acquire at least one CT image to be analyzed and at least one MR image including a target region; the CT image to be analyzed is a target scanning CT image comprising a target focus; an associated lesion determination module 320, configured to determine an associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed; wherein the target lesion attribute information includes: at least one of lesion size information and lesion density information; and a focus feature fusion module 330, configured to fuse the extracted focus features of the target focus and the associated focus to obtain target three-dimensional focus information of the target focus.
According to the technical scheme provided by the embodiment of the invention, at least one CT image to be analyzed and at least one MR image, each including the target site, are acquired, where the CT image to be analyzed is a target scan CT image containing the target focus; the associated focus of the target focus is determined from the at least one MR image based on the target focus attribute information about the target focus in the at least one CT image to be analyzed, where the target focus attribute information includes at least one of focus size information and focus density information; and the extracted focus features of the target focus and the associated focus are fused to obtain the target three-dimensional focus information of the target focus. This technical scheme solves the problem in the prior art of excessively slow information processing when fusing ordinary CT images and MR images: the focus features of a single target focus in the target scan CT image can be fused with those of the corresponding focus in the MR image, which increases the image fusion processing speed and thus improves focus diagnosis efficiency.
In an alternative embodiment, the associated lesion determination module 320 is specifically configured to: identify all foci in the MR image and determine at least one focus to be analyzed; acquire focus attribute information for each focus to be analyzed to obtain the focus attribute information to be analyzed; and determine the information similarity between the focus attribute information to be analyzed and the target focus attribute information, and determine the associated focus from the foci to be analyzed according to the information similarity.
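The similarity-based association described above can be illustrated with a small sketch. The attribute keys (`size`, `density`) and the relative-difference similarity measure below are assumptions made for illustration; the patent leaves the concrete similarity computation open.

```python
def attribute_similarity(a, b):
    """Similarity of two focus-attribute dicts in [0, 1]: 1 when identical,
    falling toward 0 as the relative differences grow."""
    sims = []
    for key in ("size", "density"):          # assumed attribute keys
        denom = max(abs(a[key]), abs(b[key]), 1e-8)
        sims.append(1.0 - min(abs(a[key] - b[key]) / denom, 1.0))
    return sum(sims) / len(sims)

def find_associated_lesion(target_attrs, candidate_attrs):
    """Pick the MR focus whose attributes best match the CT target focus."""
    scores = [attribute_similarity(target_attrs, c) for c in candidate_attrs]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```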
In an alternative embodiment, the focal feature fusion module 330 is specifically configured to: extracting focus features of the CT image to be analyzed on the target focus to obtain first focus features; extracting focus features of the associated focus in the MR image to obtain second focus features; determining a target three-dimensional focus feature according to the first focus feature and the second focus feature, constructing a target three-dimensional focus model according to the target three-dimensional focus feature, and taking the target three-dimensional focus feature and the target three-dimensional focus model as the target three-dimensional focus information.
In an alternative embodiment, the lesion feature fusion module 330 includes: a three-dimensional lesion feature determination unit configured to: inputting the first focus feature and the second focus feature into a target focus feature fusion model trained in advance to obtain the target three-dimensional focus feature; wherein the target three-dimensional lesion feature comprises: at least one of lesion three-dimensional shape, lesion volume, lesion density, lesion surface smoothness.
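As a rough stand-in for the trained target focus feature fusion model, the fragment below concatenates the first and second focus feature vectors and applies a linear projection. The default projection, which simply averages corresponding entries, is a hypothetical placeholder; in the patent, the projection weights would come from pre-training.

```python
import numpy as np

def fuse_lesion_features(ct_feat, mr_feat, weights=None):
    """Minimal stand-in for the focus feature fusion model: concatenate the
    CT-derived and MR-derived feature vectors and apply a linear projection
    to produce a joint three-dimensional focus feature."""
    x = np.concatenate([ct_feat, mr_feat])
    if weights is None:
        # Untrained placeholder projection: average corresponding entries.
        n = len(ct_feat)
        weights = 0.5 * np.hstack([np.eye(n), np.eye(n)])
    return weights @ x
```

A trained model would replace the single linear projection with learned layers mapping the concatenated features to the three-dimensional shape, volume, density, and surface-smoothness outputs listed above.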
In an alternative embodiment, the image acquisition module 310 includes: a CT image acquisition unit for: acquiring at least one CT image set to be analyzed comprising a target part, and generating a CT image control corresponding to the CT image set to be analyzed in a preset interactive interface; the CT image set to be analyzed is a target scanning CT image set related to a specified focus area; and determining a target image set from the CT image set to be analyzed in response to the selection operation of the CT image control, and taking CT images in the target image set as the CT images to be analyzed.
In an alternative embodiment, the apparatus for fusing CT images and MR images further comprises: a registration relationship display module, configured to: constructing a corresponding relation between the target three-dimensional focus information and the target focus in the CT image to be analyzed to obtain first corresponding relation information; constructing a corresponding relation between the target three-dimensional focus information and the associated focus in the MR image to obtain second corresponding relation information; and displaying the target three-dimensional focus information, the first corresponding relation information and the second corresponding relation information in a preset display interactive interface.
In an alternative embodiment, the apparatus for fusing CT images and MR images further comprises: a focus error feedback module for: inputting the target three-dimensional focus information into a pre-trained target focus error analysis model to obtain a focus error analysis result; and adjusting model parameters in the target focus characteristic fusion model according to the focus error analysis result.
The fusion device of the CT image and the MR image provided by the embodiment of the invention can execute the fusion method of the CT image and the MR image provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention. Fig. 5 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present invention. The computer device 12 shown in fig. 5 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention. The computer device 12 may be any terminal device with computing power and may be configured in a fusion device of CT images and MR images.
As shown in FIG. 5, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that connects the various system components (including the system memory 28 and the processing unit 16).
Bus 18 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor bus, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 20. As shown in fig. 5, the network adapter 20 communicates with other modules of the computer device 12 via the bus 18. It should be appreciated that although not shown in fig. 5, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement a fusion method of a CT image and an MR image provided by the embodiment of the present invention, the method including:
Acquiring at least one CT image to be analyzed and at least one MR image comprising a target site; the CT image to be analyzed is a target scanning CT image comprising a target focus;
Determining an associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed; wherein the target lesion attribute information includes: at least one of lesion size information and lesion density information;
and carrying out fusion treatment on the extracted focus characteristics of the target focus and the associated focus to obtain target three-dimensional focus information of the target focus.
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a fusion method of a CT image and an MR image as provided by any embodiment of the present invention, comprising:
Acquiring at least one CT image to be analyzed and at least one MR image comprising a target site; the CT image to be analyzed is a target scanning CT image comprising a target focus;
Determining an associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed; wherein the target lesion attribute information includes: at least one of lesion size information and lesion density information;
and carrying out fusion treatment on the extracted focus characteristics of the target focus and the associated focus to obtain target three-dimensional focus information of the target focus.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example, but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It will be appreciated by those of ordinary skill in the art that the modules or steps of the invention described above may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed over a network of computing devices; optionally, they may be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; or they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

1. A method of fusing a CT image and an MR image, comprising:
Acquiring at least one CT image to be analyzed and at least one MR image comprising a target site; the CT image to be analyzed is a target scanning CT image comprising a target focus;
Determining an associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed; wherein the target lesion attribute information includes: at least one of lesion size information and lesion density information;
and carrying out fusion treatment on the extracted focus characteristics of the target focus and the associated focus to obtain target three-dimensional focus information of the target focus.
2. The method of claim 1, wherein the determining the associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed comprises:
obtaining focus attribute information of the target focus to obtain target focus attribute information corresponding to the target focus;
Identifying all focuses in the MR image, determining at least one focus to be analyzed, and respectively acquiring focus attribute information of each focus to be analyzed to obtain focus attribute information to be analyzed corresponding to the focus to be analyzed;
and determining the information similarity of the attribute information of the focus to be analyzed and the attribute information of the target focus, and determining the associated focus from the focus to be analyzed according to the information similarity.
3. The method according to claim 1, wherein the fusing the extracted lesion characteristics of the target lesion and the associated lesion to obtain target three-dimensional lesion information of the target lesion comprises:
Extracting focus features of the CT image to be analyzed on the target focus to obtain first focus features;
extracting focus features of the associated focus in the MR image to obtain second focus features;
determining a target three-dimensional focus feature according to the first focus feature and the second focus feature, constructing a target three-dimensional focus model according to the target three-dimensional focus feature, and taking the target three-dimensional focus feature and the target three-dimensional focus model as the target three-dimensional focus information.
4. The method of claim 3, wherein said determining a target three-dimensional lesion feature from said first lesion feature and said second lesion feature comprises:
Inputting the first focus feature and the second focus feature into a target focus feature fusion model trained in advance to obtain the target three-dimensional focus feature;
wherein the target three-dimensional lesion feature comprises: at least one of lesion three-dimensional shape, lesion volume, lesion density, lesion surface smoothness.
5. The method of claim 1, wherein the acquiring at least one CT image to be analyzed including a target site comprises:
Acquiring at least one CT image set to be analyzed comprising a target part, and generating a CT image control corresponding to the CT image set to be analyzed in a preset interactive interface; the CT image set to be analyzed is a target scanning CT image set related to a specified focus area;
And determining a target image set from the CT image set to be analyzed in response to the selection operation of the CT image control, and taking CT images in the target image set as the CT images to be analyzed.
6. The method of claim 1, further comprising, after obtaining the target three-dimensional lesion information:
constructing a corresponding relation between the target three-dimensional focus information and the target focus in the CT image to be analyzed to obtain first corresponding relation information;
Constructing a corresponding relation between the target three-dimensional focus information and the associated focus in the MR image to obtain second corresponding relation information;
And displaying the target three-dimensional focus information, the first corresponding relation information and the second corresponding relation information in a preset display interactive interface.
7. The method of claim 4, further comprising, after obtaining the target three-dimensional lesion information:
Inputting the target three-dimensional focus information into a pre-trained target focus error analysis model to obtain a focus error analysis result;
And adjusting model parameters in the target focus characteristic fusion model according to the focus error analysis result.
8. A fusion device of a CT image and an MR image, the device comprising:
An image acquisition module for acquiring at least one CT image to be analyzed and at least one MR image including a target site; the CT image to be analyzed is a target scanning CT image comprising a target focus;
An associated lesion determination module configured to determine an associated lesion of the target lesion from the at least one MR image based on target lesion attribute information about the target lesion in the at least one CT image to be analyzed; wherein the target lesion attribute information includes: at least one of lesion size information and lesion density information;
and the focus feature fusion module is used for carrying out fusion treatment on the extracted focus features of the target focus and the associated focus to obtain target three-dimensional focus information of the target focus.
9. A computer device, the computer device comprising:
One or more processors;
A memory for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of fusion of CT images and MR images as claimed in any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a method of fusion of CT images and MR images as claimed in any of claims 1-7.
Application CN202410173104.6A — Fusion method, device and equipment of CT image and MR image and storage medium — filed 2024-02-07, priority date 2024-02-07 (pending). Publication number CN117994620A, published 2024-05-07. Family ID: 90887200.


Legal events: publication (PB01); entry into force of request for substantive examination (SE01).