CN107644434B - Multi-modal medical image registration method fusing gradient information and generalized entropy similarity - Google Patents


Info

Publication number
CN107644434B
Authority
CN
China
Legal status: Active
Application number
CN201710778551.4A
Other languages
Chinese (zh)
Other versions
CN107644434A (en
Inventor
李碧草
刘洲峰
王贝
张爱华
黄杰
舒华忠
朱永胜
刘闪亮
Current Assignee
Zhongyuan University of Technology
Original Assignee
Zhongyuan University of Technology
Application filed by Zhongyuan University of Technology
Priority to CN201710778551.4A
Publication of CN107644434A
Application granted
Publication of CN107644434B

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

The invention discloses a multi-modal medical image registration method fusing gradient information and generalized entropy similarity, comprising the following steps: S1, construct a generalized entropy similarity measure; S2, construct the gradient measure of the images to be registered; S3, construct the similarity measure of the images to be registered; S4, construct the registration model of the multi-modal images; and S5, solve the registration model. The invention jointly considers the gradient information and the gray-level distribution characteristics of medical images of different modalities and effectively accomplishes accurate registration of multi-modal medical images. The generalized entropy similarity measure is constructed from a generalized information entropy and therefore has broader properties. Rather than relying on the gray-level difference between images, the method computes an information-theoretic similarity between the images to be registered from the joint probability distribution of the gray values of the two images; it is therefore applicable not only to medical images of the same modality but also to medical images of different modalities with large gray-level differences, which widens the method's range of application.

Description

Multi-modal medical image registration method fusing gradient information and generalized entropy similarity
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a multi-modal medical image registration method fusing gradient information and generalized entropy similarity.
Background
Medical image registration is an important technique in the field of medical image processing and plays an important role in medical information fusion, tumor growth monitoring, image-guided surgery, radiotherapy planning, and the like. The rapid development and wide application of microelectronics, computers, and information science have driven rapid progress in medical imaging technology. The continuous development of imaging devices has led to a dramatic increase in the amount of information physicians can obtain from medical images. The image information acquired by a single imaging device cannot satisfy certain specific applications, and existing image processing methods cannot cope well with the new problems raised by the combined use of multiple imaging devices. Medical image registration can fuse complementary information together, providing more reliable information for a physician's diagnosis.
Research on medical image registration has attracted the attention of numerous scholars. Registration techniques based on information theory do not depend on gray-value differences between the images and require no preprocessing such as feature extraction or segmentation, so they have received wide attention in multi-modal medical image registration. Maximization of mutual information is the most common information-theoretic registration method: during registration, mutual information measures the degree of similarity between the reference image and the transformed floating image, and an optimization method finds the spatial transformation of the image to be registered at which the similarity is maximal. Such methods involve similarity measure construction, transformation model selection, optimization scheme design, and so on; however, mutual-information-based registration methods do not take into account the interrelationship between two independent subsystems (reference [1]: Khader M, Hamza A B.).
Pseudo-additivity is an important property of non-extensive entropy, and a registration method based on the Tsallis entropy can make up for this deficiency, effectively improving the registration accuracy of medical images (reference [2]: Khader M, Hamza A B. An information-theoretic method for multimodality medical image registration [J]. Expert Systems with Applications, 2012, 39(5): 5548-). In addition, the Arimoto entropy is also a non-extensive entropy, and a registration method based on this entropy has been used for non-rigid registration of three-dimensional medical images and obtains accurate registration results (reference [3]: Li B, Yang G, Coatrieux J L, et al. 3D nonrigid medical image registration using a new information theoretic measure [J]. Physics in Medicine and Biology, 2015, 60(22): 8767.).
However, the above registration methods based on information-theoretic measures estimate the joint probability distribution using only the gray-level information of the pixels in the images to be registered and do not consider the correlation between pixels, i.e. they neglect the spatial information among the image pixels.
The gradient of an image describes the relationship between a pixel and its neighbors and is important image information describing the spatial structure of the image. Pluim et al. proposed a new similarity measure combining normalized mutual information and gradient information (reference [4]: Pluim J P W, Maintz J B A, Viergever M A. Image registration by maximization of combined mutual information and gradient information [J]. IEEE Transactions on Medical Imaging, 2000, 19(8): 809-814.), which considers not only the modulus of the gradients of the images to be registered but also fuses the direction information of the gradients; applied to medical image registration, it obtains higher registration accuracy.
Melbourne et al. registered magnetic resonance images of the breast using fractional gradient information of the images to be registered (reference [5]: Melbourne A, Cahill N, Tanner C, et al. Using fractional gradient information in non-rigid image registration: application to breast MRI [C]// SPIE Medical Imaging. International Society for Optics and Photonics, 2012.). The method convolves a fractional-order Gaussian kernel with the image to be registered to determine the fractional-order image gradient, and performs registration in combination with a fractional-order diffusion equation; a suitable value of the fractional-order parameter must be set during registration.
Local measures based on gradient direction have also been used to register multi-modal medical images (reference [6]: De Nigris D, Collins D L, Arbel T. Multi-modal image registration based on gradient orientations of minimal uncertainty [J]. IEEE Transactions on Medical Imaging, 2012, 31(12): 2343-). The similarity is constructed from the gradient directions of least uncertainty, i.e. where the norm of the image gradient exceeds a user-specified threshold, and a multi-level framework performs the non-rigid registration. Registration experiments on preoperative brain magnetic resonance images and intraoperative ultrasound images show that the method achieves low registration errors; however, it exploits only the direction information of the gradient and does not consider the modulus of the image gradient.
Disclosure of Invention
In view of the above-described deficiencies in the prior art, the present invention provides a multi-modal medical image registration method that fuses gradient information and generalized entropy similarity. The invention realizes the registration of the multi-modal medical image by using a method of combining gradient information and information theory measure.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a multi-modal medical image registration method fusing gradient information and generalized entropy similarity comprises the following steps:
S1, construct a generalized entropy similarity measure.

S1.1, define the Arimoto entropy $A_\alpha$:

$$A_\alpha(X)=\frac{\alpha}{1-\alpha}\left[\Big(\sum_{i=1}^{M}p_i^{\alpha}\Big)^{1/\alpha}-1\right],\qquad \alpha>0,\ \alpha\neq 1 \tag{1}$$

where $X$ is a discrete random variable, $\alpha$ is the parameter controlling the non-extensivity of the Arimoto entropy, $M$ is the number of elements of $X$, $i$ indexes the elements of the probability distribution $P$ of $X$, and $p_i$ is the $i$-th element of $P$.

By L'Hôpital's rule, the limit of the Arimoto entropy as $\alpha\to 1$ equals the Shannon entropy, so the Shannon entropy can be regarded as a special case of the Arimoto entropy.
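The α → 1 limit stated above can be checked numerically. A minimal sketch, assuming the common closed form $A_\alpha(P)=\frac{\alpha}{1-\alpha}[(\sum_i p_i^{\alpha})^{1/\alpha}-1]$ for the Arimoto entropy (the patent's own formula image is not reproduced in this text):

```python
import numpy as np

def arimoto_entropy(p, alpha):
    # Arimoto entropy A_alpha(P) = alpha/(1-alpha) * ((sum_i p_i^alpha)^(1/alpha) - 1),
    # defined for alpha > 0, alpha != 1 (assumed standard form).
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # drop zero-mass bins (0 log 0 -> 0 convention)
    return alpha / (1.0 - alpha) * ((p ** alpha).sum() ** (1.0 / alpha) - 1.0)

def shannon_entropy(p):
    # Shannon entropy in nats, the alpha -> 1 limit of the Arimoto entropy.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

p = np.array([0.5, 0.25, 0.125, 0.125])
# Near alpha = 1 the two entropies agree, illustrating the L'Hopital limit.
print(arimoto_entropy(p, 1.0001), shannon_entropy(p))
```

As α moves away from 1 the entropy becomes non-extensive, which is the property the similarity measure below exploits.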
S1.2, construct the Jensen-Arimoto divergence (JAD):

$$JA_\alpha(P_1,P_2,\ldots,P_n)=A_\alpha\Big(\sum_{i}\omega_i P_i\Big)-\sum_{i}\omega_i A_\alpha(P_i) \tag{2}$$

where $A_\alpha(\cdot)$ is the Arimoto entropy and the $\omega_i$ are weighting factors with $\omega_i\ge 0$ and $\sum_i\omega_i=1$. Since the limit of the Arimoto entropy as $\alpha\to 1$ is the Shannon entropy, the limit of the JAD as $\alpha\to 1$ is the Jensen-Shannon divergence.
S1.3, construct the generalized entropy similarity measure:

$$JA_\alpha\big(F(T_\mu(x)),R(x)\big)=A_\alpha\Big(\sum_i p(r_i)\,p(f_j\mid r_i)\Big)-\sum_i p(r_i)\,A_\alpha\big(p(f_j\mid r_i)\big) \tag{3}$$

where $R(x)$ is the reference image; $F$ is the floating image; $F(T_\mu(x))$ is the transformed floating image; $T$ denotes the spatial transformation; $x$ is a pixel of the reference image $R$; $T_\mu(x)$ is the transformed coordinate point; $f_j$ denotes a gray level of the transformed floating image and $r_i$ a gray level of the reference image $R(x)$; $p(f_j\mid r_i)$ is the conditional probability distribution of the transformed floating image given the reference image; $p(r_i)$ is the probability distribution of the reference image; $p(f_j)$ is the probability distribution of the floating image under the spatial transformation $T$; and $p(r_i,f_j)$ is the joint probability distribution of the floating image and the reference image under $T$.
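In practice, the probabilities in this measure come from the joint gray-level histogram of the two images. The sketch below evaluates the measure that way; the Arimoto closed form, the α value, and the bin count are illustrative assumptions. Like mutual information, the measure is large for well-aligned images and near zero for independent ones:

```python
import numpy as np

def arimoto_entropy(p, alpha=1.5):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return alpha / (1.0 - alpha) * ((p ** alpha).sum() ** (1.0 / alpha) - 1.0)

def generalized_entropy_similarity(r, f, alpha=1.5, bins=32):
    # Joint histogram -> p(r_i, f_j); rows give p(f_j | r_i), row sums give p(r_i).
    joint, _, _ = np.histogram2d(r.ravel(), f.ravel(), bins=bins)
    joint /= joint.sum()
    p_r = joint.sum(axis=1)            # weighting factors omega_i = p(r_i)
    p_f = joint.sum(axis=0)            # mixture sum_i p(r_i) p(f_j|r_i) = p(f_j)
    mixture = arimoto_entropy(p_f, alpha)
    conditional = sum(w * arimoto_entropy(row / w, alpha)
                      for row, w in zip(joint, p_r) if w > 0)
    return mixture - conditional

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))               # independent of a
print(generalized_entropy_similarity(a, a))   # large: identical images
print(generalized_entropy_similarity(a, b))   # small: unrelated images
```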
S2, construct the gradient measure of the images to be registered.

S2.1, obtain the gradient directions of the images to be registered.

S2.1.1, calculate the gradient $\nabla R(x,\sigma)$ of the reference image $R$:

$$\nabla R(x,\sigma)=\big(\nabla_h R(x,\sigma),\ \nabla_v R(x,\sigma)\big)=\big(R(x)\ast G_x'(\sigma),\ R(x)\ast G_y'(\sigma)\big) \tag{4}$$

where $\nabla_h R(x,\sigma)$ denotes the gradient of the reference image in the horizontal direction; $\nabla_v R(x,\sigma)$ denotes the gradient in the vertical direction; $\ast$ denotes convolution; $G_x'(\sigma)$ denotes the first derivative in the $x$ direction of the Gaussian kernel of scale $\sigma$; and $G_y'(\sigma)$ denotes its first derivative in the $y$ direction.
S2.1.2, calculate the gradient $\nabla F(T_\mu(x),\sigma)$ of the floating image $F(T_\mu(x))$ under the spatial transformation $T$:

$$\nabla F(T_\mu(x),\sigma)=\big(\nabla_h F(T_\mu(x),\sigma),\ \nabla_v F(T_\mu(x),\sigma)\big)=\big(F(T_\mu(x))\ast G_x'(\sigma),\ F(T_\mu(x))\ast G_y'(\sigma)\big) \tag{5}$$

where $T$ denotes the spatial transformation; $T_\mu(x)$ is the transformed coordinate point; $\mu$ is the spatial transformation parameter; $\nabla_h F(T_\mu(x),\sigma)$ denotes the horizontal gradient of the transformed floating image; and $\nabla_v F(T_\mu(x),\sigma)$ its vertical gradient.
S2.1.3, calculate the angle between the gradient $\nabla R(x,\sigma)$ of the reference image and the gradient $\nabla F(T_\mu(x),\sigma)$ of the floating image under the transformation $T$:

$$\theta_{\sigma,\mu}(x)=\arccos\frac{\nabla R(x,\sigma)\cdot\nabla F(T_\mu(x),\sigma)}{\lvert\nabla R(x,\sigma)\rvert\,\lvert\nabla F(T_\mu(x),\sigma)\rvert} \tag{6}$$

where $\lvert\nabla R(x,\sigma)\rvert$ denotes the modulus of $\nabla R(x,\sigma)$ and $\lvert\nabla F(T_\mu(x),\sigma)\rvert$ the modulus of $\nabla F(T_\mu(x),\sigma)$.

S2.1.4, calculate the direction weight of the gradients of the images to be registered:

$$\omega(x)=\frac{\cos\big(2\,\theta_{\sigma,\mu}(x)\big)+1}{2} \tag{7}$$
S2.2, obtain the moduli of the gradients of the images to be registered.

S2.2.1, calculate the modulus of the reference image gradient $\nabla R(x,\sigma)$:

$$\lvert\nabla R(x,\sigma)\rvert=\sqrt{\big(\nabla_h R(x,\sigma)\big)^2+\big(\nabla_v R(x,\sigma)\big)^2} \tag{8}$$

S2.2.2, calculate the modulus of the floating image gradient $\nabla F(T_\mu(x),\sigma)$ under the transformation $T$:

$$\lvert\nabla F(T_\mu(x),\sigma)\rvert=\sqrt{\big(\nabla_h F(T_\mu(x),\sigma)\big)^2+\big(\nabla_v F(T_\mu(x),\sigma)\big)^2} \tag{9}$$

S2.2.3, obtain the modulus term of the gradients of the images to be registered:

$$M_{\sigma,\mu}(x)=\min\big(\lvert\nabla R(x,\sigma)\rvert,\ \lvert\nabla F(T_\mu(x),\sigma)\rvert\big) \tag{10}$$

where $\sigma$ denotes the scale of the Gaussian kernel and $\mu$ is the spatial transformation parameter.
S2.3, calculate the gradient measure of the images to be registered:

$$G\big(R(x),F(T_\mu(x))\big)=\sum_{x}\omega(x)\,M_{\sigma,\mu}(x) \tag{11}$$

where $\omega(x)$ carries the direction of the gradients and $M_{\sigma,\mu}(x)$ their modulus; $\omega(x)$ can be regarded as the weight of the gradient modulus, and the sum runs over the pixels in the overlapping part of the reference image $R(x)$ and the transformed floating image $F(T_\mu(x))$.
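A compact sketch of the whole gradient measure, with SciPy's Gaussian-derivative filters standing in for the convolutions with the derivative-of-Gaussian kernels; the image contents and the scale σ are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_measure(R, F_t, sigma=1.0):
    # Gaussian-derivative gradients of both images (horizontal and vertical).
    rh = gaussian_filter(R, sigma, order=(0, 1))
    rv = gaussian_filter(R, sigma, order=(1, 0))
    fh = gaussian_filter(F_t, sigma, order=(0, 1))
    fv = gaussian_filter(F_t, sigma, order=(1, 0))
    mod_r, mod_f = np.hypot(rh, rv), np.hypot(fh, fv)
    # Angle between the gradient vectors, then the direction weight
    # (cos(2*theta)+1)/2: 1 for parallel/anti-parallel gradients, 0 at 90 degrees.
    cos_t = np.clip((rh * fh + rv * fv) / (mod_r * mod_f + 1e-12), -1.0, 1.0)
    weight = (np.cos(2.0 * np.arccos(cos_t)) + 1.0) / 2.0
    # Weighted sum of the smaller modulus over the overlapping pixels.
    return float((weight * np.minimum(mod_r, mod_f)).sum())

square = np.zeros((64, 64))
square[16:48, 16:48] = 1.0
aligned = gradient_measure(square, square)
shifted = gradient_measure(square, np.roll(square, 8, axis=1))
print(aligned > shifted)               # True: the measure rewards aligned edges
```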
S3, construct the similarity measure of the images to be registered:

$$S\big(R(x),F(T_\mu(x))\big)=G\big(R(x),F(T_\mu(x))\big)\,JA_\alpha\big(F(T_\mu(x)),R(x)\big) \tag{12}$$

where $G\big(R(x),F(T_\mu(x))\big)$ is the gradient measure of the floating image and the reference image under the spatial transformation $T_\mu$, representing the gradient information between the images to be registered, and $JA_\alpha\big(F(T_\mu(x)),R(x)\big)$ is the generalized entropy similarity measure of the floating image and the reference image under $T_\mu$.

S4, construct the registration model of the multi-modal images:

$$\mu^{*}=\arg\max_{\mu}\,S\big(R(x),F(T_\mu(x))\big) \tag{13}$$

Substituting the similarity measure of the images to be registered into equation (13) gives the final registration function:

$$\mu^{*}=\arg\max_{\mu}\,G\big(R(x),F(T_\mu(x))\big)\,JA_\alpha\big(F(T_\mu(x)),R(x)\big) \tag{14}$$
S5, solve the registration model with a direction-acceleration method (a Powell-type search) to obtain the optimal spatial transformation parameters.

S5.1, initialization: set the initial point, the maximum number of iterations, the tolerance, and n linearly independent search directions.

S5.2, basic search: perform one-dimensional searches along the n directions in turn.

S5.3, accelerated search: choose an accelerating direction and check the tolerance; if the error condition is met, terminate the iteration and output the optimal solution, otherwise jump to S5.4 and continue searching.

S5.4, adjust the search directions to form a new set of directions and return to S5.2; when the iteration terminates, output the optimal spatial transformation parameters.
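Steps S5.1-S5.4 describe a direction-acceleration search of the Powell type, which SciPy exposes as method='Powell'. A toy sketch that recovers a known translation; the image, the shift, and the stand-in objective (a plain sum of squared differences instead of the fused measure) are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

# Smooth synthetic reference image and a floating image misaligned by (2.5, -1.5).
yy, xx = np.mgrid[0:48, 0:48]
ref = np.exp(-((yy - 24.0) ** 2 + (xx - 24.0) ** 2) / 50.0)
floating = nd_shift(ref, (2.5, -1.5), order=1, mode='nearest')

def cost(mu):
    # Stand-in objective over the translation parameters mu; the patent would
    # maximize G * JA_alpha here, so one would return its negative instead.
    moved = nd_shift(floating, mu, order=1, mode='nearest')
    return float(((ref - moved) ** 2).sum())

# Powell: one-dimensional searches along n independent directions plus an
# accelerating direction, iterated until the tolerance is met (cf. S5.1-S5.4).
res = minimize(cost, x0=[0.0, 0.0], method='Powell', options={'xtol': 1e-4})
print(np.round(res.x, 1))              # approximately [-2.5, 1.5]
```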
The method solves the multi-modal medical image registration problem using the gradient and gray-level information of the images to be registered: it combines the gradient information of the images with the generalized entropy similarity measure to measure the similarity between the reference image and the floating image; it builds the multi-modal registration model from the spatial transformation of the images together with the combined gradient information and generalized entropy similarity measure; and it solves the registration model with the direction-acceleration method, searching for the spatial transformation relating the images to be registered and achieving accurate registration of the medical images.
The advantages of the invention are:
(1) The invention jointly considers the gradient information and the gray-level distribution characteristics of medical images of different modalities and effectively accomplishes accurate registration of multi-modal medical images.
(2) The method constructs the generalized entropy similarity measure from a generalized information entropy and therefore has broader properties.
(3) The method does not rely on the gray-level difference between images but computes an information-theoretic similarity between the images to be registered from the joint probability distribution of the gray values of the two images; it is therefore applicable both to medical images of the same modality and to medical images of different modalities with large gray-level differences, widening the method's range of application.
(4) The invention requires no preprocessing of the medical images to be registered, such as feature extraction or segmentation, and has strong adaptability and robustness.
Drawings
Fig. 1 is a reference image, in which fig. 1a is a Computed Tomography (CT) image and fig. 1b is a Positron Emission Tomography (PET) image.
Fig. 2 is a floating image, wherein fig. 2a is an MR T1 image, fig. 2b is an MR T2 image, and fig. 2c is an MR PD image.
Fig. 3 is a flow chart of multi-modality medical image registration of the present invention.
Fig. 4 shows results before registration of the CT image and the three MR images: fig. 4a CT and MR T1, fig. 4b CT and MR T2, and fig. 4c CT and MR PD.
Fig. 5 shows results after registration of the CT image and the three MR images: fig. 5a CT and MR T1, fig. 5b CT and MR T2, and fig. 5c CT and MR PD.
Fig. 6 shows results before registration of the PET image and the three MR images: fig. 6a PET and MR T1, fig. 6b PET and MR T2, and fig. 6c PET and MR PD.
Fig. 7 shows results after registration of the PET image and the three MR images: fig. 7a PET and MR T1, fig. 7b PET and MR T2, and fig. 7c PET and MR PD.
Detailed Description
As shown in fig. 3, a multi-modal medical image registration method fusing gradient information and generalized entropy similarity includes the following steps:

S1, construct a generalized entropy similarity measure.

S1.1, define the Arimoto entropy $A_\alpha$:

$$A_\alpha(X)=\frac{\alpha}{1-\alpha}\left[\Big(\sum_{i=1}^{M}p_i^{\alpha}\Big)^{1/\alpha}-1\right],\qquad \alpha>0,\ \alpha\neq 1 \tag{1}$$

where $X$ is a discrete random variable, $\alpha$ is the parameter controlling the non-extensivity of the Arimoto entropy, $M$ is the number of elements of $X$, $i$ indexes the elements of the probability distribution $P$ of $X$, and $p_i$ is the $i$-th element of $P$.

By L'Hôpital's rule, the limit of the Arimoto entropy as $\alpha\to 1$ equals the Shannon entropy, so the Shannon entropy can be regarded as a special case of the Arimoto entropy.
S1.2, construct the Jensen-Arimoto divergence (JAD):

$$JA_\alpha(P_1,P_2,\ldots,P_n)=A_\alpha\Big(\sum_{i}\omega_i P_i\Big)-\sum_{i}\omega_i A_\alpha(P_i) \tag{2}$$

where $A_\alpha(\cdot)$ is the Arimoto entropy and the $\omega_i$ are weighting factors with $\omega_i\ge 0$ and $\sum_i\omega_i=1$. Since the limit of the Arimoto entropy as $\alpha\to 1$ is the Shannon entropy, the limit of the JAD as $\alpha\to 1$ is the Jensen-Shannon divergence.
S1.3, construct the generalized entropy similarity measure:

$$JA_\alpha\big(F(T_\mu(x)),R(x)\big)=A_\alpha\Big(\sum_i p(r_i)\,p(f_j\mid r_i)\Big)-\sum_i p(r_i)\,A_\alpha\big(p(f_j\mid r_i)\big) \tag{3}$$

where $R(x)$ is the reference image; $F$ is the floating image; $F(T_\mu(x))$ is the transformed floating image; $T$ denotes the spatial transformation; $x$ is a pixel of the reference image $R$; $T_\mu(x)$ is the transformed coordinate point; $f_j$ denotes a gray level of the transformed floating image and $r_i$ a gray level of the reference image $R(x)$; $p(f_j\mid r_i)$ is the conditional probability distribution of the transformed floating image given the reference image; $p(r_i)$ is the probability distribution of the reference image; $p(f_j)$ is the probability distribution of the floating image under the spatial transformation $T$; and $p(r_i,f_j)$ is the joint probability distribution of the floating image and the reference image under $T$.
S2, construct the gradient measure of the images to be registered.

S2.1, obtain the gradient directions of the images to be registered.

S2.1.1, calculate the gradient $\nabla R(x,\sigma)$ of the reference image $R$:

$$\nabla R(x,\sigma)=\big(\nabla_h R(x,\sigma),\ \nabla_v R(x,\sigma)\big)=\big(R(x)\ast G_x'(\sigma),\ R(x)\ast G_y'(\sigma)\big) \tag{4}$$

where $\nabla_h R(x,\sigma)$ denotes the gradient of the reference image in the horizontal direction; $\nabla_v R(x,\sigma)$ denotes the gradient in the vertical direction; $\ast$ denotes convolution; $G_x'(\sigma)$ denotes the first derivative in the $x$ direction of the Gaussian kernel of scale $\sigma$; and $G_y'(\sigma)$ denotes its first derivative in the $y$ direction.
S2.1.2, calculate the gradient $\nabla F(T_\mu(x),\sigma)$ of the floating image $F(T_\mu(x))$ under the spatial transformation $T$:

$$\nabla F(T_\mu(x),\sigma)=\big(\nabla_h F(T_\mu(x),\sigma),\ \nabla_v F(T_\mu(x),\sigma)\big)=\big(F(T_\mu(x))\ast G_x'(\sigma),\ F(T_\mu(x))\ast G_y'(\sigma)\big) \tag{5}$$

where $T$ denotes the spatial transformation; $T_\mu(x)$ is the transformed coordinate point; $\mu$ is the spatial transformation parameter; $\nabla_h F(T_\mu(x),\sigma)$ denotes the horizontal gradient of the transformed floating image; and $\nabla_v F(T_\mu(x),\sigma)$ its vertical gradient.
S2.1.3, calculate the angle between the gradient $\nabla R(x,\sigma)$ of the reference image and the gradient $\nabla F(T_\mu(x),\sigma)$ of the floating image under the transformation $T$:

$$\theta_{\sigma,\mu}(x)=\arccos\frac{\nabla R(x,\sigma)\cdot\nabla F(T_\mu(x),\sigma)}{\lvert\nabla R(x,\sigma)\rvert\,\lvert\nabla F(T_\mu(x),\sigma)\rvert} \tag{6}$$

where $\lvert\nabla R(x,\sigma)\rvert$ denotes the modulus of $\nabla R(x,\sigma)$ and $\lvert\nabla F(T_\mu(x),\sigma)\rvert$ the modulus of $\nabla F(T_\mu(x),\sigma)$.

S2.1.4, calculate the direction weight of the gradients of the images to be registered:

$$\omega(x)=\frac{\cos\big(2\,\theta_{\sigma,\mu}(x)\big)+1}{2} \tag{7}$$
S2.2, obtain the moduli of the gradients of the images to be registered.

S2.2.1, calculate the modulus of the reference image gradient $\nabla R(x,\sigma)$:

$$\lvert\nabla R(x,\sigma)\rvert=\sqrt{\big(\nabla_h R(x,\sigma)\big)^2+\big(\nabla_v R(x,\sigma)\big)^2} \tag{8}$$

S2.2.2, calculate the modulus of the floating image gradient $\nabla F(T_\mu(x),\sigma)$ under the transformation $T$:

$$\lvert\nabla F(T_\mu(x),\sigma)\rvert=\sqrt{\big(\nabla_h F(T_\mu(x),\sigma)\big)^2+\big(\nabla_v F(T_\mu(x),\sigma)\big)^2} \tag{9}$$

S2.2.3, obtain the modulus term of the gradients of the images to be registered:

$$M_{\sigma,\mu}(x)=\min\big(\lvert\nabla R(x,\sigma)\rvert,\ \lvert\nabla F(T_\mu(x),\sigma)\rvert\big) \tag{10}$$

where $\sigma$ denotes the scale of the Gaussian kernel and $\mu$ is the spatial transformation parameter.
S2.3, calculate the gradient measure of the images to be registered:

$$G\big(R(x),F(T_\mu(x))\big)=\sum_{x}\omega(x)\,M_{\sigma,\mu}(x) \tag{11}$$

where $\omega(x)$ carries the direction of the gradients and $M_{\sigma,\mu}(x)$ their modulus; $\omega(x)$ is regarded as the weight of the gradient modulus, and the sum runs over the pixels in the overlapping part of the reference image $R(x)$ and the transformed floating image $F(T_\mu(x))$.
S3, construct the similarity measure of the images to be registered:

$$S\big(R(x),F(T_\mu(x))\big)=G\big(R(x),F(T_\mu(x))\big)\,JA_\alpha\big(F(T_\mu(x)),R(x)\big) \tag{12}$$

where $G\big(R(x),F(T_\mu(x))\big)$ is the gradient measure of the floating image and the reference image under the spatial transformation $T_\mu$, representing the gradient information between the images to be registered, and $JA_\alpha\big(F(T_\mu(x)),R(x)\big)$ is the generalized entropy similarity measure of the floating image and the reference image under $T_\mu$.

S4, construct the registration model of the multi-modal images:

$$\mu^{*}=\arg\max_{\mu}\,S\big(R(x),F(T_\mu(x))\big) \tag{13}$$

Substituting the similarity measure of the images to be registered into equation (13) gives the final registration function:

$$\mu^{*}=\arg\max_{\mu}\,G\big(R(x),F(T_\mu(x))\big)\,JA_\alpha\big(F(T_\mu(x)),R(x)\big) \tag{14}$$
S5, solve the registration model with a direction-acceleration method (a Powell-type search) to obtain the optimal spatial transformation parameters.

S5.1, initialization: set the initial point, the maximum number of iterations, the tolerance, and n linearly independent search directions.

S5.2, basic search: perform one-dimensional searches along the n directions in turn.

S5.3, accelerated search: choose an accelerating direction and check the tolerance; if the error condition is met, terminate the iteration and output the optimal solution, otherwise jump to S5.4 and continue searching.

S5.4, adjust the search directions to form a new set of directions and return to S5.2; when the iteration terminates, output the optimal spatial transformation parameters.
The idea of the invention is described in further detail below with reference to the accompanying drawings and comparative examples.

The invention provides a multi-modal medical image registration method based on gradient information and generalized entropy similarity. First, the properties of the Arimoto entropy are studied and a generalized entropy similarity measure is constructed, which estimates the joint probability distribution of the two images and measures the similarity between them. Second, the gradients of the reference image and the floating image are calculated to obtain the modulus and direction information of the image gradients. Then a registration model of the medical images is established from the spatial transformation of the images combined with the gradient information and the generalized entropy similarity. Finally, the model is solved by an optimization algorithm that searches for the optimal transformation between the images to be registered, achieving accurate registration of the medical images. The flow of the registration algorithm is shown in fig. 3; specifically:
I. Construction of the generalized entropy similarity measure

Step 1. Definition of the generalized entropy

Assume a discrete random variable $X$ with probability distribution $P=(p_1,p_2,\ldots,p_M)$. The Arimoto entropy of $X$ is defined as

$$A_\alpha(X)=\frac{\alpha}{1-\alpha}\left[\Big(\sum_{i=1}^{M}p_i^{\alpha}\Big)^{1/\alpha}-1\right],\qquad \alpha>0,\ \alpha\neq 1 \tag{1}$$

By L'Hôpital's rule, the limit of the Arimoto entropy as $\alpha\to 1$ equals the Shannon entropy, so the Shannon entropy can be regarded as a special case of the Arimoto entropy.

Step 2. Definition of the Jensen-Arimoto divergence

For probability distributions $P_1,P_2,\ldots,P_n$ with weighting factors $\omega_i$, the Jensen-Arimoto divergence (JAD) is defined as

$$JA_\alpha(P_1,P_2,\ldots,P_n)=A_\alpha\Big(\sum_{i}\omega_i P_i\Big)-\sum_{i}\omega_i A_\alpha(P_i) \tag{2}$$

where $A_\alpha(\cdot)$ is the Arimoto entropy, $\omega_i\ge 0$ and $\sum_i\omega_i=1$. Since the limit of the Arimoto entropy as $\alpha\to 1$ is the Shannon entropy, the limit of the JAD as $\alpha\to 1$ is the Jensen-Shannon divergence.

Step 3. Construction of the generalized entropy similarity measure

For a reference image $R$, a floating image $F$ and a spatial transformation $T$, let $x$ be a pixel of the reference image and $T_\mu(x)$ the corresponding transformed coordinate point, and let $f=(f_1,f_2,\ldots,f_M)$ and $r=(r_1,r_2,\ldots,r_M)$ denote the gray levels of $F(T_\mu(x))$ and $R(x)$, respectively.

In equation (2), take as component distributions the conditional probability distribution of the transformed floating image given the reference image, written for brevity as $p_i=p(F=f_j\mid R=r_i)=p(f_j\mid r_i)$, $i=1,2,\ldots,M$, and take the probability distribution of the reference image as the weighting factor, i.e. $\omega_i=p(R(x))=p(R=r_i)=p(r_i)$. Substituting $p_i$ and $\omega_i$ into the definition of the JAD gives

$$JA_\alpha\big(F(T_\mu(x)),R(x)\big)=A_\alpha\Big(\sum_i p(r_i)\,p(f_j\mid r_i)\Big)-\sum_i p(r_i)\,A_\alpha\big(p(f_j\mid r_i)\big) \tag{3}$$

where the JAD measures the similarity between the reference image $R(x)$ and the floating image $F(T_\mu(x))$.
II. Computation of the gradient information of the images to be registered

Specifically:

Step 1. Calculate the horizontal and vertical gradients of the images to be registered

Given a reference image $R$, a floating image $F$ and a spatial transformation $T$, let $x$ be an arbitrary coordinate point of the reference image, $T_\mu(x)$ the corresponding transformed point, and $\mu$ the spatial transformation parameter. The gradient of the reference image $R$ is

$$\nabla R(x,\sigma)=\big(\nabla_h R(x,\sigma),\ \nabla_v R(x,\sigma)\big)=\big(R(x)\ast G_x'(\sigma),\ R(x)\ast G_y'(\sigma)\big) \tag{4}$$

and the gradient of the transformed floating image $F(T_\mu(x))$ is

$$\nabla F(T_\mu(x),\sigma)=\big(\nabla_h F(T_\mu(x),\sigma),\ \nabla_v F(T_\mu(x),\sigma)\big)=\big(F(T_\mu(x))\ast G_x'(\sigma),\ F(T_\mu(x))\ast G_y'(\sigma)\big) \tag{5}$$

The subscripts $h$ and $v$ in equations (4) and (5) denote the horizontal and vertical directions, $\ast$ denotes convolution, and $G'(\sigma)$ denotes the first derivative of the Gaussian kernel of scale $\sigma$.

Step 2. Estimate the direction information of the gradients

From equations (4) and (5), the angle between the gradient vectors is

$$\theta_{\sigma,\mu}(x)=\arccos\frac{\nabla R(x,\sigma)\cdot\nabla F(T_\mu(x),\sigma)}{\lvert\nabla R(x,\sigma)\rvert\,\lvert\nabla F(T_\mu(x),\sigma)\rvert} \tag{6}$$

where $\nabla(x,\sigma)$ denotes a gradient vector at point $x$ at scale $\sigma$ and $\lvert\cdot\rvert$ is the modulus of the gradient.
For multi-modal images, different imaging techniques give the same tissue or organ different gray values, so the gradient vectors of the two images may point in different directions. However, the anatomical structures described by medical images of different modalities are consistent, so the gradient directions at corresponding points of the images to be registered are the same or opposite: the angle between the gradient vectors should be small (close to 0) or tend to π.
Therefore, a weighting function is used to estimate the direction information:

$$\omega(x)=\frac{\cos\big(2\,\theta_{\sigma,\mu}(x)\big)+1}{2} \tag{7}$$

When the angle equals 0 or π, the weighting function attains its maximum value 1; when the angle equals π/2, its value is 0.
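The stated behavior is easy to verify; a one-line sketch, assuming the weighting function takes the form w(θ) = (cos(2θ) + 1)/2:

```python
import numpy as np

def direction_weight(theta):
    # Full weight for parallel or anti-parallel gradients (theta = 0 or pi,
    # e.g. inverted contrast across modalities), zero weight at a right angle.
    return (np.cos(2.0 * theta) + 1.0) / 2.0

print(direction_weight(0.0), direction_weight(np.pi / 2), direction_weight(np.pi))
# -> 1.0 0.0 1.0
```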
Step 3. Obtain the modulus information of the gradients

For multi-modal medical image registration, only the parts of the two images containing strong gradient information are of interest. Compute the moduli of $\nabla R(x,\sigma)$ and $\nabla F(T_\mu(x),\sigma)$:

$$\lvert\nabla R(x,\sigma)\rvert=\sqrt{\big(\nabla_h R(x,\sigma)\big)^2+\big(\nabla_v R(x,\sigma)\big)^2} \tag{8}$$

$$\lvert\nabla F(T_\mu(x),\sigma)\rvert=\sqrt{\big(\nabla_h F(T_\mu(x),\sigma)\big)^2+\big(\nabla_v F(T_\mu(x),\sigma)\big)^2} \tag{9}$$

To retain only the strong gradient information shared by the images to be registered, the smaller of the two moduli is taken; the modulus information of the gradient is thus estimated as

$$M_{\sigma,\mu}(x)=\min\big(\lvert\nabla R(x,\sigma)\rvert,\ \lvert\nabla F(T_\mu(x),\sigma)\rvert\big) \tag{10}$$

where $\sigma$ denotes the scale of the Gaussian kernel and $\mu$ is the spatial transformation parameter.
Step 4, fusion of gradient modulus information and direction information
To make full use of the image gradient information during registration, the direction and the modulus of the gradient are combined:

G(R(x), F(T_μ(x))) = Σ_{x∈Ω} ω(θ_{σ,μ}(x)) · M_{σ,μ}(x)   (11)

where ω and M represent the direction and modulus information of the gradient respectively; ω serves as the weight of the gradient modulus, and the sum runs over the pixels in the overlapping part Ω of R(x) and F(T_μ(x)). When the two images are perfectly aligned, formula (11) takes its maximum value.
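A self-contained sketch of the gradient measure of formula (11), combining the direction weight and the minimum modulus; function and argument names are my own, not the patent's:

```python
import numpy as np

def gradient_measure(gr_h, gr_v, gf_h, gf_v, eps=1e-12):
    """Gradient measure G of formula (11): sum over overlapping pixels
    of the direction weight (formula 7) times the smaller of the two
    gradient moduli (formulas 8-10)."""
    mod_r = np.hypot(gr_h, gr_v)                       # formula (8)
    mod_f = np.hypot(gf_h, gf_v)                       # formula (9)
    cos_t = np.clip((gr_h * gf_h + gr_v * gf_v) / (mod_r * mod_f + eps),
                    -1.0, 1.0)
    weight = (np.cos(2.0 * np.arccos(cos_t)) + 1.0) / 2.0    # formula (7)
    return float(np.sum(weight * np.minimum(mod_r, mod_f)))  # formulas (10)-(11)
```

For identical gradient fields the weight is 1 everywhere and G reduces to the sum of the gradient moduli, consistent with formula (11) being maximal at perfect alignment.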
Thirdly: establishment of the multi-modal image registration model
The method specifically comprises the following steps:
step 1. construction of the similarity measure
The method combines the generalized entropy similarity with the gradient information of the images to be registered to construct the final similarity measure:

S(R(x), F(T_μ(x))) = G(R(x), F(T_μ(x))) · JA_α(F(T_μ(x)), R(x))   (12)

where G and JA_α respectively denote the gradient measure and the generalized entropy similarity measure of the floating image under the spatial transformation T_μ and the reference image.
Step 2. establishment of the registration framework
Given a reference image R, a floating image F and a spatial transformation T, let x be an arbitrary coordinate point of the reference image, T_μ(x) the transformed point, and μ the spatial transformation parameter. The registration of R and F can then be regarded as the following optimization problem:

μ̂ = argmax_μ S(R(x), F(T_μ(x)))   (13)

where S denotes the similarity measure and μ̂ is the optimal spatial transformation parameter, at which R(x) and F(T_μ(x)) are fully registered and the similarity attains its maximum.
Substituting the similarity measure S into formula (13) gives

μ̂ = argmax_μ G(R(x), F(T_μ(x))) · JA_α(F(T_μ(x)), R(x))   (14)
Image registration thus amounts to solving the above optimization problem; the optimal transformation parameters obtained achieve accurate registration of the reference image and the floating image.
Fourthly: solution of the registration model
Since the registration function in formula (14) is composed of two terms, the gradient measure and the generalized entropy similarity measure, its derivative is difficult to determine. The registration function is therefore optimized with the Powell algorithm, also called the direction acceleration method, a search method that uses conjugate directions to accelerate convergence. The algorithm does not require the derivative of the registration function and is an effective direct search method. It iterates repeatedly within the search space, performing a one-dimensional search along each dimension until convergence.
For three-dimensional medical image registration, if a rigid-body transformation is selected as the spatial transformation model, the dimension of the search space is 6, comprising three translations and three rotation angles.
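The 6-parameter rigid transform can be sketched as a 4×4 homogeneous matrix; the parameter ordering and the Z·Y·X rotation composition below are illustrative choices on my part, not specified by the patent:

```python
import numpy as np

def rigid_transform_3d(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous matrix for a 3-D rigid transform with three
    translations and three rotation angles (radians) about x, y, z."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # one common rotation composition order
    T[:3, 3] = (tx, ty, tz)
    return T
```

The identity parameters give the identity matrix, and the rotation block is always a proper rotation (determinant 1), which is an easy invariant to check.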
The optimization algorithm comprises the following specific steps:
First step: set an initial point x^(0), n linearly independent directions d^(0), d^(1), …, d^(n−1), and an allowable error ε > 0; let k = 0.
Second step: let y^(0) = x^(k); for j = 1, 2, …, n, perform a one-dimensional search along d^(j−1): find λ^(j−1) such that f(y^(j−1) + λ^(j−1) d^(j−1)) = min_λ f(y^(j−1) + λ d^(j−1)), and set y^(j) = y^(j−1) + λ^(j−1) d^(j−1).
Third step: take the acceleration direction d^(n) = y^(n) − y^(0). If ‖d^(n)‖ < ε, terminate the iteration; y^(n) is the approximate optimal solution of the image registration. Otherwise, starting from y^(n), perform a one-dimensional search along d^(n) to find λ^(n) such that f(y^(n) + λ^(n) d^(n)) = min_λ f(y^(n) + λ d^(n)); set x^(k+1) = y^(n) + λ^(n) d^(n), let k = k + 1, and go to the fourth step.
Fourth step: remove d^(0) from the original n directions and append d^(n) to form the new search directions; return to the second step.
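In practice the Powell loop above need not be hand-coded; SciPy ships a Powell implementation. The sketch below minimizes the negative of a stand-in similarity function — the quadratic `neg_similarity` and its optimum are placeholders for the real objective −G·JA_α, which the patent does not provide as code:

```python
import numpy as np
from scipy.optimize import minimize

def neg_similarity(mu):
    """Placeholder objective: the real method would evaluate
    -G(R, F(T_mu)) * JA_alpha(F(T_mu), R) for transform parameters mu."""
    target = np.array([2.0, -1.0, 0.5])   # stand-in optimum
    return float(np.sum((mu - target) ** 2))

# Derivative-free Powell search, with tolerances matching the
# experiments below (precision 0.01, at most 100 iterations).
res = minimize(neg_similarity, x0=np.zeros(3), method="Powell",
               options={"xtol": 0.01, "maxiter": 100})
```

Because Powell only evaluates the objective, it sidesteps the non-differentiability of the product measure in formula (14).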
The following experiments were performed on clinically common medical images, including CT, PET and three types of MR images. The image sizes were: CT 512 × 512, MR 256 × 256, PET 128 × 128.
CT and PET were selected as reference images, as shown in Figs. 1a and 1b; three types of MR images were selected as floating images, as shown in Figs. 2a-2c.
The images shown in Figs. 1 and 2 are registered using the method based on gradient information and generalized entropy similarity. In the experiments, the parameter α in the generalized entropy similarity is set to 1.5, the number of histogram bins for estimating the joint probability distribution is 64, the allowed precision in the Powell optimization algorithm is 0.01, and the maximum number of iterations is 100. The grids before registration of CT with the MR T1, MR T2 and MR PD images are shown in Figs. 4a-4c, and the grids after registration with the present method in Figs. 5a-5c. Similarly, the results for the PET image and the three types of MR images before registration are shown in Figs. 6a-6c, and after registration in Figs. 7a-7c. As can be seen from Figs. 4-7, the invention obtains accurate registration results when registering medical images of different modalities.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (2)

1. A multi-modal medical image registration method fusing gradient information and generalized entropy similarity is characterized by comprising the following steps:
s1, constructing generalized entropy similarity measure;
in step S1, the specific steps are: S1.1, defining the Arimoto entropy A_α:

A_α(X) = α/(1−α) · [ ( Σ_{i=1}^{M} p_i^α )^{1/α} − 1 ],  α > 0, α ≠ 1

wherein X is a discrete random variable, α is a parameter controlling the non-extensivity of the Arimoto entropy, M is the number of elements of the discrete random variable X, i indexes the elements of the probability distribution P of X, and p_i is the i-th element of P;
s1.2, constructing the Jensen-Arimoto divergence:

JA_α(P_1, P_2, …, P_n) = A_α( Σ_i ω_i P_i ) − Σ_i ω_i A_α(P_i)

in the formula, A_α(·) denotes the Arimoto entropy, ω_i denotes the weighting factors, with ω_i ≥ 0 and Σ_i ω_i = 1;
S1.3, constructing the generalized entropy similarity measure:

JA_α(F(T_μ(x)), R(x)) = A_α( Σ_i p(r_i) p(f_j | r_i) ) − Σ_i p(r_i) A_α( p(f_j | r_i) )

wherein R(x) is the reference image; F is the floating image; F(T_μ(x)) is the transformed floating image; T denotes a spatial transformation; x is a pixel point of the reference image R; T_μ(x) denotes the transformed coordinate point; f_j denotes the gray level of the transformed floating image and r_i the gray level of the reference image R(x); p(f_j | r_i) denotes the conditional probability distribution of the transformed floating image given the reference image; p(r_i) denotes the probability distribution of the reference image; p(f_j) denotes the probability distribution of the floating image under the spatial transformation T; p(r_i, f_j) denotes the joint probability distribution of the floating image and the reference image under the transformation T;
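A numerical sketch of the three definitions above, computed from a joint gray-level histogram. Taking the weights as p(r_i) and the component distributions as the conditionals p(f_j | r_i) follows the definitions just given, but the function names and histogram handling are my own assumptions:

```python
import numpy as np

def arimoto_entropy(p, alpha=1.5):
    """Arimoto entropy A_alpha of a probability vector (alpha > 0, alpha != 1)."""
    p = np.asarray(p, dtype=float)
    return alpha / (1.0 - alpha) * (np.sum(p ** alpha) ** (1.0 / alpha) - 1.0)

def ja_similarity(joint, alpha=1.5):
    """Jensen-Arimoto similarity of a joint histogram: the JA divergence
    with weights p(r_i) and component distributions p(f_j | r_i)."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    p_r = joint.sum(axis=1)                     # marginal p(r_i)
    nz = p_r > 0
    cond = joint[nz] / p_r[nz, None]            # conditionals p(f_j | r_i)
    mixture = p_r[nz] @ cond                    # equals the marginal p(f_j)
    return arimoto_entropy(mixture, alpha) - float(
        np.sum(p_r[nz] * np.array([arimoto_entropy(c, alpha) for c in cond])))
```

Statistically independent images give a similarity of 0, while a one-to-one gray-level correspondence maximizes it — which is why the registration model maximizes this quantity.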
s2, constructing the gradient measure of the image to be registered;
s3, constructing similarity measure of the image to be registered;
S(R(x), F(T_μ(x))) = G(R(x), F(T_μ(x))) · JA_α(F(T_μ(x)), R(x))   (12);

wherein G(R(x), F(T_μ(x))) denotes the gradient measure of the floating image under the spatial transformation T_μ and the reference image, representing the gradient information of the images to be registered; JA_α(F(T_μ(x)), R(x)) denotes the generalized entropy similarity measure of the floating image under the spatial transformation T_μ and the reference image;
s4, constructing the registration model of the multi-modal images:

μ̂ = argmax_μ G(R(x), F(T_μ(x))) · JA_α(F(T_μ(x)), R(x))

s5, solving the registration model by the direction acceleration method to obtain the optimal spatial transformation parameters;
in step S2, the specific steps are: S2.1, obtaining the gradient direction of the images to be registered;
s2.2, obtaining the modulus of the gradient of the images to be registered;
s2.3, calculating the gradient measure of the images to be registered:

G(R(x), F(T_μ(x))) = Σ_{x∈Ω} ω(θ_{σ,μ}(x)) · M_{σ,μ}(x)

in the formula, ω(θ_{σ,μ}(x)) indicates the direction of the gradient, M_{σ,μ}(x) is the modulus of the gradient, ω is the weight of the gradient modulus, and the sum runs over the pixels in the overlapping part of the reference image R(x) and the transformed floating image F(T_μ(x));
in step S2.1, the specific steps are: S2.1.1, calculating the gradient of the reference image R:

∇_h R(x, σ) = R(x) ⊗ G'_x(σ)

∇_v R(x, σ) = R(x) ⊗ G'_y(σ)

wherein ∇_h R(x, σ) represents the gradient of the reference image in the horizontal direction; ∇_v R(x, σ) represents the gradient of the reference image in the vertical direction; ⊗ represents convolution; G'_x(σ) represents the first derivative in the x-direction of a Gaussian kernel of scale σ; G'_y(σ) represents the first derivative in the y-direction of a Gaussian kernel of scale σ;
s2.1.2, calculating the gradient of the transformed floating image F(T_μ(x)):

∇_h F(T_μ(x), σ) = F(T_μ(x)) ⊗ G'_x(σ)

∇_v F(T_μ(x), σ) = F(T_μ(x)) ⊗ G'_y(σ)

wherein T represents a spatial transformation; T_μ(x) represents the transformed coordinate point; μ is the spatial transformation parameter; ∇_h F(T_μ(x), σ) represents the gradient of the transformed floating image in the horizontal direction; ∇_v F(T_μ(x), σ) represents the gradient of the transformed floating image in the vertical direction;
s2.1.3, calculating the angle between the gradient ∇R(x, σ) of the reference image and the gradient ∇F(T_μ(x), σ) of the transformed floating image:

θ_{σ,μ}(x) = arccos( ∇R(x, σ) · ∇F(T_μ(x), σ) / ( |∇R(x, σ)| |∇F(T_μ(x), σ)| ) )

wherein |∇R(x, σ)| represents the modulus of the gradient ∇R(x, σ); |∇F(T_μ(x), σ)| represents the modulus of the gradient ∇F(T_μ(x), σ);
s2.1.4, calculating the gradient direction of the images to be registered:

ω(θ_{σ,μ}(x)) = ( cos(2θ_{σ,μ}(x)) + 1 ) / 2
in step S2.2, the specific steps are: S2.2.1, calculating the modulus of the gradient ∇R(x, σ) of the reference image:

|∇R(x, σ)| = √( (∇_h R(x, σ))² + (∇_v R(x, σ))² )

s2.2.2, calculating the modulus of the gradient ∇F(T_μ(x), σ) of the floating image under the transformation T:

|∇F(T_μ(x), σ)| = √( (∇_h F(T_μ(x), σ))² + (∇_v F(T_μ(x), σ))² )

s2.2.3, obtaining the modulus of the gradient of the images to be registered:

M_{σ,μ}(x) = min( |∇R(x, σ)|, |∇F(T_μ(x), σ)| )

where σ denotes the scale of the Gaussian kernel and μ is the spatial transformation parameter.
2. The multi-modal medical image registration method for fusing gradient information and generalized entropy similarity according to claim 1, wherein in step S5, the specific steps are as follows: s5.1, initializing, and setting an initial point, a maximum iteration number, an allowable error and n linearly independent directions;
s5.2, basic searching, namely sequentially and respectively performing one-dimensional searching along n directions;
s5.3, performing an accelerated search: select an acceleration direction and judge whether the allowable error is satisfied; if the error condition is met, terminate the iteration and output the optimal solution; otherwise, go to S5.4 to continue searching;
and S5.4, adjusting the search direction to form a new search direction, returning to S5.2 until iteration is terminated, and outputting the optimal space transformation parameters.
CN201710778551.4A 2017-09-01 2017-09-01 Multi-modal medical image registration method fusing gradient information and generalized entropy similarity Active CN107644434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710778551.4A CN107644434B (en) 2017-09-01 2017-09-01 Multi-modal medical image registration method fusing gradient information and generalized entropy similarity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710778551.4A CN107644434B (en) 2017-09-01 2017-09-01 Multi-modal medical image registration method fusing gradient information and generalized entropy similarity

Publications (2)

Publication Number Publication Date
CN107644434A CN107644434A (en) 2018-01-30
CN107644434B true CN107644434B (en) 2020-09-25

Family

ID=61110431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710778551.4A Active CN107644434B (en) 2017-09-01 2017-09-01 Multi-modal medical image registration method fusing gradient information and generalized entropy similarity

Country Status (1)

Country Link
CN (1) CN107644434B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053431A (en) * 2018-02-24 2018-05-18 中原工学院 A kind of non-rigid medical image registration method based on gradient distribution
CN111754554B (en) * 2020-06-28 2023-09-15 上海应用技术大学 Craniocerebral multi-modal medical image registration method
CN113920414B (en) * 2021-12-14 2022-07-15 北京柏惠维康科技有限公司 Method for determining similarity between images, and method and device for fusing images
CN115049568B (en) * 2022-06-06 2024-03-22 北京工业大学 Method for characterizing biological tissue based on fusion of ultrasonic information entropy image and homodyne K distribution alpha parameter image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593351A (en) * 2008-05-28 2009-12-02 中国科学院自动化研究所 Ocular fundus image registration method based on range conversion and rigid transformation parameters estimation
CN102446358A (en) * 2012-01-17 2012-05-09 南京航空航天大学 Multi-mode medical image registration method based on edge features and CS (Cauchy-Schwarz) information
CN102622759A (en) * 2012-03-19 2012-08-01 苏州迪凯尔医疗科技有限公司 Gray scale and geometric information combined medical image registration method
CN103886586A (en) * 2014-02-18 2014-06-25 南京邮电大学 Medical image registration method based on combination of mutual information and gradient information


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A New Divergence Measure Based on Arimoto Entropy for Medical Image Registration;Bicao Li等;《2014 22nd International Conference on Pattern Recognition》;20140828;第3197-3202页 *

Also Published As

Publication number Publication date
CN107644434A (en) 2018-01-30

Similar Documents

Publication Publication Date Title
Ferrante et al. Slice-to-volume medical image registration: A survey
CN107644434B (en) Multi-modal medical image registration method fusing gradient information and generalized entropy similarity
US8165361B2 (en) System and method for image based multiple-modality cardiac image alignment
Holden A review of geometric transformations for nonrigid body registration
US7715654B2 (en) System and method for fast multimodal registration by least squares
Nag Image registration techniques: a survey
Wang et al. A review of deformation models in medical image registration
Khader et al. An information-theoretic method for multimodality medical image registration
JP2009520558A (en) Point-based adaptive elasticity image registration
Chowdhury et al. Concurrent segmentation of the prostate on MRI and CT via linked statistical shape models for radiotherapy planning
Vásquez Osorio et al. Accurate CT/MR vessel‐guided nonrigid registration of largely deformed livers
Daly et al. Multimodal medical image registration based on a hybrid optimization strategy
Wan et al. Multi-target landmark detection with incomplete images via reinforcement learning and shape prior embedding
Queirós et al. Fast left ventricle tracking in CMR images using localized anatomical affine optical flow
Lötjönen et al. Four-chamber 3-D statistical shape model from cardiac short-axis and long-axis MR images
June et al. Fast and accurate rigid registration of 3D CT images by combining feature and intensity
Rao et al. Non-rigid registration of multi-modality medical image using combined gradient information and mutual information
Chen et al. Multi-resolution statistical shape models for multi-organ shape modelling
Boukellouz et al. Evaluation of several similarity measures for deformable image registration using T1-weighted MR images of the brain
Du et al. HNSF Log‐Demons: Diffeomorphic demons registration using hierarchical neighbourhood spectral features
Zhang et al. Model-based nonrigid image registration using scale-invariant features
Menon et al. Applicability of non-rigid medical image registration using moving least squares
Zhang et al. A multiscale adaptive mask method for rigid intraoperative ultrasound and preoperative CT image registration
Zhang et al. Efficient sparse shape composition with its applications in biomedical image analysis: An overview
CN113192014B (en) Training method and device for improving ventricle segmentation model, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 451191 No. 1 Huaihe Road, Shuang Hu Economic and Technological Development Zone, Xinzheng, Zhengzhou, Henan

Applicant after: Zhongyuan University of Technology

Address before: 451191 No. 1 Huaihe Road, Shuanghu Town Economic and Technological Development Zone, Zhengzhou City, Henan Province

Applicant before: Zhongyuan University of Technology

CB03 Change of inventor or designer information

Inventor after: Li Bicao

Inventor after: Liu Zhoufeng

Inventor after: Wang Bei

Inventor after: Zhang Aihua

Inventor after: Huang Jie

Inventor after: Shu Huazhong

Inventor after: Zhu Yongsheng

Inventor after: Liu Shanliang

Inventor before: Li Bicao

Inventor before: Liu Zhoufeng

Inventor before: Wang Bei

Inventor before: Zhang Aihua

Inventor before: Huang Jie

Inventor before: Shu Huazhong

GR01 Patent grant