CN103985109B - Feature-level medical image fusion method based on 3D (three dimension) shearlet transform
- Publication number: CN103985109B (application CN201410246721.0A)
- Authority: CN (China)
- Legal status: Active (the listed status is an assumption, not a legal conclusion)
Landscapes
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a feature-level medical image fusion method based on the 3D shearlet transform, belonging to the technical field of medical image processing and its applications. The main steps of the method are: 1. Apply the 3D-D-CSST or 3D-DT-CSST transform to the two source images to obtain the transform-coefficient images Ca and Cb. 2. Fuse the transform coefficients to obtain the fused coefficients Cf. 3. Apply the inverse DWT or DTCWT to the fused coefficients Cf, then apply the backward shear transform to the resulting images to obtain the fused image Vf. The invention solves the problems that fused-image quality is relatively low and that locally important but inconspicuous information is easily discarded.
Description
Technical Field
The invention belongs to the technical field of medical image processing and its applications, and specifically relates to a feature-level medical image fusion method based on the 3D shearlet transform. It addresses the problems that fused-image quality is relatively low and that locally important but inconspicuous information is easily overlooked.
Background Art
Medical image fusion is a branch of image fusion, and many fusion methods are already widely used in clinical diagnosis. Fusion refers to the process of extracting the important information about a target from source images acquired by different devices, such as CT and MRI, and merging it into a single image. Images produced by different devices, or by different configurations of the same device, carry different information; some of it is similar, but most is complementary. For example, CT images mainly provide information about dense, hard tissues of the human body, while MRI images mainly provide information about soft tissues. Post-processed images derived from a single MRI acquisition also differ: a T2* image provides contrast based on tissue relaxation times, while a quantitative susceptibility map (QSM: Quantitative Susceptibility Mapping) provides susceptibility contrast caused by various magnetic biomarkers (such as iron, calcium, and contrast agents like gadolinium). In general, image fusion requires the source images to be registered first; since T2* and QSM images are generated by post-processing the data of the same scan, the two are already fully registered.
Current research on medical image fusion mainly considers two-dimensional images, yet many types of medical equipment now produce three-dimensional images. In a 3D image, the gray value of each point is correlated not only with neighboring points in the same slice but also with neighboring points in adjacent slices. Traditional 2D fusion methods lose this third-dimension information, so it is necessary to study fusion methods that can process 3D images directly.
Fusion algorithms can operate in the spatial domain or in a transform domain. In the spatial domain, the fused image is usually a weighted average of the source data; such methods are simple to implement, but the fused-image quality is low. Transform-domain methods follow three steps: 1) transform the source images into the transform domain; 2) process the image coefficients according to a fusion rule to obtain the fused coefficients; 3) transform the coefficients back into the spatial domain, the output being the fused image. Research on this class of algorithms focuses on two points: the choice of transform and the design of the fusion rule. Many multi-scale transforms can be applied in fusion algorithms, such as the DWT, DTCWT, curvelet, and shearlet transforms.
The shearlet transform is an efficient representation of multi-dimensional data that was proposed in recent years and has gradually matured. Indeed, to address the wavelet transform's inability to sparsely represent directional features such as edges, scholars have proposed many other multi-scale transforms. The shearlet transform, however, is the only one that combines all of the following advantages: a single (or finite) set of generating functions, a nearly optimal representation of high-dimensional data, a unified treatment of continuous and discrete data, and compactly supported implementations. The shearlet transform has been widely applied in image processing, for example in denoising, edge detection, and enhancement.
Shearlets are equally suitable for image fusion. Existing image fusion techniques have the following defects: 1) traditional fusion methods based on wavelet and pyramid transforms produce fused images of relatively low quality, because these multi-scale transforms cannot sparsely represent the directional structure of an image; 2) pixel-level image fusion ignores the structural information of the image, so locally important but inconspicuous information may be discarded during fusion. These defects can adversely affect the final medical diagnosis.
Summary of the Invention
In view of the above prior art, the purpose of the present invention is to provide a feature-level medical image fusion method based on the 3D shearlet transform. It addresses the defects of fusion methods based on wavelet and pyramid transforms, whose lack of sparse representation of directional image structure leads to relatively low fused-image quality, and the defect that locally important but inconspicuous information is easily discarded during fusion; both defects ultimately harm medical diagnosis.
To solve the above technical problems, the present invention adopts the following technical solution:
In this document, the 3D shearlet specifically refers to the 3D compactly supported shearlet (3D-D-shearlet) or the 3D dual-tree compactly supported shearlet (3D-DT-shearlet). The D-shearlet transform comprises two steps: a forward shear transform followed by a DWT. The DT-shearlet transform comprises two steps: a forward shear transform followed by a DTCWT.
A feature-level medical image fusion method based on the 3D shearlet transform, characterized by comprising the following steps:
1. Prepare the two 3D medical images Va and Vb to be fused. Apply the forward shear transform along each of the three axes of both images, then apply the discrete wavelet transform (DWT) or the dual-tree complex wavelet transform (DTCWT) to the sheared images, obtaining the corresponding groups of transform-coefficient images Ca and Cb.
2. Fuse the coefficients obtained from the 3D shearlet transform to obtain the fused coefficient image Cf.
3. Apply the inverse DWT or DTCWT to the fused coefficients Cf, then apply the backward shear transform to the results to obtain multiple groups of fused images; average these images to obtain the final fused image Vf.
In the present invention, step 2 comprises the following two sub-steps:
2.1. Fuse the low-frequency parts CaL and CbL of the transformed images Ca and Cb using the averaging rule to obtain the low-frequency part CfL of the fused image.
2.2. Fuse the high-frequency parts CaH and CbH at the feature level: determine the feature type of the images to be fused at each position, and fuse them according to the maximum-information-retention rule to obtain CfH.
In step 1 of the present invention, the forward shear transform is applied to the image first, and the DWT or DTCWT is applied to the sheared image afterwards. The forward shear transform is as follows: for a set of three-dimensional data of size l×m×n, establish a coordinate system with origin (0, 0, 0) and opposite corner (l-1, m-1, n-1), and apply shear transforms along the three axes. The shear transform along the z direction applies a coordinate transform to each point of the data, as do the shear transforms along the x and y directions (the formula images are not reproduced here). Here (x, y, z) are the coordinates before the transform and (x', y', z') the coordinates after it; ktr, {tr = a1, b1, a2, b2, a3, b3}, is the maximum moving distance. Different values of ktr retain information in different directions, so the shearlet transform produces a number of 3D images determined by the numbers of directions chosen for kai and kbi (the exact counts are not reproduced here).
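The three shear formulas referenced above appear only as images in the source and are not reproduced. A hedged reconstruction, assuming a linear shear in which ktr is the maximum displacement, reached at the far face of the volume, would be:

```latex
% Assumed reconstruction; the patent's own formula images are not available.
% The assignment of the (k_{ai}, k_{bi}) pairs to axes is also an assumption.
\begin{aligned}
\text{$z$-direction:}\quad & x' = x + \operatorname{round}\!\Big(k_{a1}\tfrac{z}{n-1}\Big),\;
 y' = y + \operatorname{round}\!\Big(k_{b1}\tfrac{z}{n-1}\Big),\; z' = z \\
\text{$x$-direction:}\quad & y' = y + \operatorname{round}\!\Big(k_{a2}\tfrac{x}{l-1}\Big),\;
 z' = z + \operatorname{round}\!\Big(k_{b2}\tfrac{x}{l-1}\Big),\; x' = x \\
\text{$y$-direction:}\quad & x' = x + \operatorname{round}\!\Big(k_{a3}\tfrac{y}{m-1}\Big),\;
 z' = z + \operatorname{round}\!\Big(k_{b3}\tfrac{y}{m-1}\Big),\; y' = y
\end{aligned}
```

Under this reading, the slice at z = n-1 is displaced by exactly (ka1, kb1), matching the description of ktr as the maximum moving distance.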
In step 2.1, the low-frequency part of the transformed images is fused with the averaging rule:
CfL = (CaL + CbL)/2 (2)
In step 2.2, the high-frequency part of the transformed images is fused at the feature level. The specific steps are as follows:
2.2.1. First compute the structure tensors of the high-frequency parts CaH and CbH of the transform-coefficient images Ca and Cb, then perform a rank analysis on the structure tensors.
For each point of the high-frequency parts CaH and CbH, the structure tensor is a 3×3 matrix whose rank may be 0, 1, 2, or 3, corresponding respectively to flat, planar, linear, and point-like region features in the image. Let Ω be a local region of size l1×m1×n1; the structure tensor of a point p is then expressed as a weighted sum over Ω (formula image not reproduced here), where w(r) is a Gaussian template of size l1×m1×n1, and Vx(p), Vy(p), Vz(p) are the partial derivatives of the image along the x, y, and z axes respectively.
Compute the eigenvalues Ex, Ey, Ez of this 3×3 tensor matrix and set a threshold (formula image not reproduced here; k is a control parameter, set to 0.01), then count the number of non-zero eigenvalues of point p. For the same position in the two images, let Ma denote the number of non-zero eigenvalues for Ca and Mb the number for Cb; Ma and Mb serve as approximations of the rank of the tensor matrix.
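The structure-tensor computation and rank test of step 2.2.1 can be sketched in NumPy. Because the tensor and threshold formulas appear only as images in the source, two assumptions are made: the tensor is the standard Gaussian-weighted sum of gradient outer products, and the threshold is k times the largest eigenvalue at each voxel. `structure_tensor_rank` is a hypothetical helper name, not from the patent.

```python
import numpy as np

def _gauss1d(sigma=1.0, radius=2):
    # 1-D Gaussian kernel used to build the separable template w(r).
    x = np.arange(-radius, radius + 1, dtype=float)
    w = np.exp(-x**2 / (2.0 * sigma**2))
    return w / w.sum()

def _smooth(f, w):
    # Separable convolution of a 3-D array with the Gaussian template.
    for ax in range(3):
        f = np.apply_along_axis(lambda v: np.convolve(v, w, mode="same"), ax, f)
    return f

def structure_tensor_rank(vol, sigma=1.0, k=0.01):
    """Approximate rank (0..3) of the 3-D structure tensor at every voxel.

    Assumptions (the patent's formulas are images): standard tensor
    S = w * (grad V grad V^T); threshold = k * largest eigenvalue.
    """
    Vx, Vy, Vz = np.gradient(vol.astype(float))      # partial derivatives
    w = _gauss1d(sigma)
    # Six independent entries of the symmetric 3x3 tensor, smoothed by w(r).
    Jxx, Jyy, Jzz = _smooth(Vx * Vx, w), _smooth(Vy * Vy, w), _smooth(Vz * Vz, w)
    Jxy, Jxz, Jyz = _smooth(Vx * Vy, w), _smooth(Vx * Vz, w), _smooth(Vy * Vz, w)
    S = np.stack([np.stack([Jxx, Jxy, Jxz], -1),
                  np.stack([Jxy, Jyy, Jyz], -1),
                  np.stack([Jxz, Jyz, Jzz], -1)], -2)  # shape (l, m, n, 3, 3)
    E = np.linalg.eigvalsh(S)                          # ascending eigenvalues
    thresh = np.maximum(k * E[..., -1:], 1e-12)
    return (E > thresh).sum(axis=-1)                   # Ma / Mb per voxel
```

A flat volume yields rank 0 everywhere, and a volume varying along a single axis yields rank 1, matching the flat/planar classification described above.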
2.2.2. If Ma = Mb, the two images have the same type of feature at this position; compute the similarity γab of this position (formula image not reproduced here).
Compute the threshold α (formula image not reproduced here). The fusion rule is:
When γab ≤ α, this position carries redundant information, and the weighted rule is selected:
CfH = ωaCaH + ωbCbH (5)
When γab > α, this position carries complementary information, and the MRE rule is used (formula image not reproduced here).
2.2.3. If Ma ≠ Mb, a dedicated fusion rule is applied (formula image not reproduced here).
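The high-frequency fusion rules of step 2.2 can be sketched as follows. The patent's similarity γab, threshold α, weights ω, MRE rule, and unequal-rank rule appear only as formula images, so the concrete forms below (energy-normalized similarity, energy weights, absolute-maximum selection for both the MRE and the Ma ≠ Mb cases, α = 0.75) are assumptions standing in for them, and `fuse_high` is a hypothetical helper name.

```python
import numpy as np

def fuse_high(CaH, CbH, Ma, Mb, alpha=0.75):
    """Feature-level high-frequency fusion sketch (assumed concrete formulas)."""
    Ea, Eb = CaH**2, CbH**2
    gamma = 2.0 * np.abs(CaH * CbH) / (Ea + Eb + 1e-12)   # assumed similarity
    wa = Ea / (Ea + Eb + 1e-12)                            # assumed weights
    wb = 1.0 - wa
    abs_max = np.where(np.abs(CaH) >= np.abs(CbH), CaH, CbH)
    # Same feature type and gamma <= alpha -> redundant -> weighted average;
    # otherwise (complementary, or Ma != Mb) -> keep the stronger coefficient
    # (assumed stand-in for the MRE and unequal-rank rules).
    redundant = (Ma == Mb) & (gamma <= alpha)
    return np.where(redundant, wa * CaH + wb * CbH, abs_max)
```

The branch structure (weighted average for redundant positions, maximum-retention otherwise) follows the text above even though the individual formulas are assumed.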
In step 3, the backward shear transform is applied to the images obtained from the inverse DWT or DTCWT, as follows.
The backward shear transform is the inverse operation of the forward shear transform: the shear along the z direction applies the inverse coordinate transform to the points of the data, and likewise for the shears along the x and y directions (formula images not reproduced here). Here (x, y, z) are the coordinates before the transform and (x', y', z') the coordinates after it, and the values of ktr, {tr = a1, b1, a2, b2, a3, b3}, correspond to the values used in the forward shear transform.
Compared with the prior art, the present invention has the following beneficial effects:
1. Compared with multi-scale transforms such as the traditional wavelet and pyramid transforms, which lack the ability to sparsely represent directional structural features, the compactly supported shearlet transform represents the anisotropic features of high-dimensional signals almost optimally; the fused image therefore retains more accurate directional information, giving higher fusion quality.
2. The DT-shearlet transform introduces a dual-tree structure, which reduces the distortion of the fused image caused by shift variance.
3. The DT-shearlet and D-shearlet are compactly supported in the spatial domain, which gives higher fusion quality than frequency-domain shearlets.
4. The present invention uses a feature-level fusion method that takes the internal structural characteristics of the scanned organ into account (including flat, planar, linear, and point-like regions) and preserves the structural information and physical features of the object as far as possible; the fused-image quality is therefore higher than that of pixel-level fusion rules that consider only the statistical features of the high-frequency coefficients.
5. The quality of the fused image of the present invention, measured by the objective indicators MI and QAB/F, is higher.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the image fusion method of the present invention;
Fig. 2 is a schematic diagram of the two-dimensional shear transform;
Fig. 3 is a schematic diagram of the three-dimensional shear transform (shear along the z-axis direction).
Detailed Description
The present invention is further described below with reference to the accompanying drawings and a specific embodiment.
Taking a T2* magnitude image and a QSM image as an example, this experiment processes the three-dimensional T2* magnitude image and QSM image with the method of the present invention and finally obtains a fused image. In this example, the image size is 128×128×128.
The image fusion method of this embodiment first considers how to express the anisotropy of a three-dimensional image; the shear transform represents anisotropic image features well. It then considers the effect of the shift variance introduced by the DWT, and therefore uses the dual-tree complex wavelet transform DTCWT. Finally, it aims to merge as much of the information carried by the low- and high-frequency coefficients as possible into the fused coefficient image: the averaging rule is used for the low-frequency coefficients, and a feature-level fusion rule for the high-frequency coefficients.
The procedure, shown in Fig. 1, comprises the following steps:
Step 1: apply the forward 3D-DT-shearlet transform to the two images Va and Vb to obtain the transform coefficients Ca and Cb. This consists of the 3D forward shear transform and the 3D DTCWT.
Forward shear transform: for a set of three-dimensional data of size l×m×n, establish a coordinate system with origin (0, 0, 0) and opposite corner (l-1, m-1, n-1). The shear along the z axis transforms the x and y coordinates of the points in the data; analogous shear formulas apply along the other two axes (formula images not reproduced here).
Fig. 2 is a schematic diagram of the shear transform. For this example, l = m = n = 128, and ktr, {tr = a1, b1, a2, b2, a3, b3}, takes the values -64, 0, 64, producing 27 groups of transformed image data.
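The per-axis shear used in steps 1 and 3 can be sketched for the z axis. The patent's exact formula is an image, so the linear slice-shift below is an assumption; it uses circular shifts (`np.roll`) so that the backward transform of step 3 recovers the input exactly, and `shear_z` is a hypothetical helper name.

```python
import numpy as np

def shear_z(vol, ka, kb, inverse=False):
    """Shear a volume along the z axis.

    Assumed form: slice z is shifted in x by round(ka * z / (n - 1)) and in
    y by round(kb * z / (n - 1)), so that ka, kb are the maximum displacements,
    reached at the far face z = n - 1. Shifts are circular to keep the volume
    size fixed; inverse=True applies the backward shear.
    """
    l, m, n = vol.shape
    out = np.empty_like(vol)
    s = -1 if inverse else 1
    for z in range(n):
        dx = s * round(ka * z / (n - 1))
        dy = s * round(kb * z / (n - 1))
        out[:, :, z] = np.roll(np.roll(vol[:, :, z], dx, axis=0), dy, axis=1)
    return out
```

The shears along the x and y axes follow by permuting the roles of the coordinates; how the (kai, kbi) combinations across the three axes enumerate exactly 27 sheared volumes for ktr ∈ {-64, 0, 64} is not fully specified in the source.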
Step 2: fuse the 27 groups of coefficient image data Ca and Cb obtained from the 3D-DT-shearlet transform to obtain Cf. This consists of fusing the low-frequency coefficients and fusing the high-frequency coefficients.
1) The low-frequency parts CaL and CbL of the coefficients Ca and Cb obtained from the 3D-DT-shearlet transform are fused with the averaging rule:
CfL = (CaL + CbL)/2
2) For the high-frequency parts CaH and CbH, choose the local region Ω of point p with size l1×m1×n1, here 3×3×3, and compute the structure tensor of p (formula image not reproduced here), where w(r) is a Gaussian template of size l1×m1×n1, and Vx(p), Vy(p), Vz(p) are the partial derivatives of the image along the x, y, and z axes respectively.
Because neighboring voxels are correlated, strictly flat, planar, linear, or point-like regions rarely occur, so the non-zero-eigenvalue condition is relaxed appropriately when extracting image structure features: an eigenvalue of the structure tensor smaller than the corresponding threshold is treated as zero, and the number of eigenvalues above the threshold is taken as the rank of the matrix. Compute the eigenvalues Ex, Ey, Ez of this 3×3 matrix and set the threshold (formula image not reproduced here; k is a control parameter, which may be set to 0.01). The number M of non-zero eigenvalues of point p approximates the rank of the tensor matrix. For the same position in the two images, Ma records the number of eigenvalues of CaH above the threshold, and Mb records the number of eigenvalues of CbH above the threshold.
If Ma = Mb, the two images have the same type of feature at this position, so compute the similarity γab of this position (formula image not reproduced here).
Compute the threshold α (formula image not reproduced here). The fusion rule is:
When γab ≤ α, this position carries redundant information, and the weighted rule is selected:
CfH = ωaCaH + ωbCbH
When γab > α, this position carries complementary information, and the MRE rule is used (formula image not reproduced here).
If Ma ≠ Mb, a dedicated fusion rule is applied (formula image not reproduced here).
Step 3: apply the backward 3D-DT-shearlet transform to the fused coefficient image Cf to obtain the final fused image. This consists of the inverse 3D DTCWT and the 3D backward shear transform.
The backward shear transform is the inverse operation of the forward shear transform: the shear along the z direction applies the inverse coordinate transform to the points of the data, and likewise for the shears along the x and y directions (formula images not reproduced here).
Choose l = m = n = 128 and ktr, {tr = a1, b1, a2, b2, a3, b3} = -64, 0, 64; finally, average the 27 inverse-transformed 3D images to obtain the fused image Vf.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to it. Any equivalent substitution or modification made, within the scope disclosed by the present invention, by a person familiar with this technical field according to the technical solution of the present invention and its inventive concept falls within the protection scope of the present invention.
Claims (5)
Priority Application (1)
CN201410246721.0A (CN103985109B) | Priority/filing date: 2014-06-05 | Feature-level medical image fusion method based on 3D (three dimension) shearlet transform
Publications (2)
CN103985109A | 2014-08-13
CN103985109B | 2017-05-10