Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a marker registration method based on a tetrahedral structure, which is used to address the technical problems in the prior art that marker registration imposes strict requirements on the registration order of the markers and yields low registration accuracy.
According to an aspect of the embodiments of the present invention, there is provided a registration method for a marker based on a tetrahedral structure, the method including:
respectively acquiring an image space data set and a target space data set, wherein the image space data set comprises image space mark point information of a three-dimensional virtual image space, and the target space data set comprises target space mark point information of a patient operation target space;
selecting four marking points in the image space data set as vertexes to form an image space tetrahedral model, and selecting corresponding four marking points in the target space data set as vertexes to form a target space tetrahedral model;
determining a scaling of the image space data set relative to the target space data set based on the image space tetrahedral model and the target space tetrahedral model;
scaling the image space marker points in the image space data set according to the scaling ratio to obtain a scaled image space data set;
calculating the correlation between each image space marker point in the scaled image space data set and each target space marker point in the target space data set according to the image space marker point information in the scaled image space data set and the target space marker point information, so as to determine the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set;
performing rigid transformation on the image space tetrahedral model and the target space tetrahedral model according to the order among the image space marker points, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set, and determining an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system;
and determining a conversion matrix according to the correspondence, the scaling ratio, the image translation matrix, the image rotation matrix, the target translation matrix and the target rotation matrix, and registering each image space marker point in the image space data set with the corresponding target space marker point in the target space data set according to the conversion matrix to obtain a registered image space data set.
In an optional manner, after determining the conversion matrix according to the correspondence, the scaling ratio, the image translation matrix, the image rotation matrix, the target translation matrix and the target rotation matrix, the method further includes:
acquiring the error range between each image space marker point, after transformation by the conversion matrix, and the corresponding target space marker point;
constructing an optimization function according to the error range;
and optimizing the conversion matrix according to the optimization function to obtain the optimized conversion matrix.
In an alternative mode, the image space tetrahedral model includes an image triangle formed by three image space marker points, and the target space tetrahedral model includes a corresponding target triangle formed by three target space marker points;
determining a scaling of the image space data set relative to the target space data set from the image space tetrahedral model and the target space tetrahedral model, further comprising:
determining the normal vector and circumcircle center coordinates of the image triangle according to the coordinate information of the image space marker points and the triangle theorem;
determining the normal vector and circumcircle center coordinates of the target triangle according to the coordinate information of the target space marker points and the triangle theorem;
determining the radius of a circumscribed circle of the image triangle according to the normal vector and the coordinates of the center of the circumscribed circle of the image triangle, and determining the radius of the circumscribed circle of the target triangle according to the normal vector and the coordinates of the center of the circumscribed circle of the target triangle;
and determining the scaling according to the radius of the circumscribed circle of the image triangle and the radius of the circumscribed circle of the target triangle.
In an optional manner, calculating the correlation between each image space marker point in the scaled image space data set and each target space marker point in the target space data set according to the image space marker point information in the scaled image space data set and the target space marker point information, so as to determine the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set, further includes:
respectively calculating the image multi-dimensional distance vector between each image space marker point and every other image space marker point in the scaled image space data set, and the target multi-dimensional distance vector between each target space marker point and every other target space marker point in the target space data set, according to the image space marker point information in the scaled image space data set and the target space marker point information;
and determining the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set according to the correlation between the image multi-dimensional distance vectors and the target multi-dimensional distance vectors.
In an optional manner, according to an order between the image space marker points, an order between the target space marker points, and a correspondence relationship between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, rigidly transforming the image space tetrahedral model and the target space tetrahedral model, and determining an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system, further include:
moving the center of a circumscribed circle of the image triangle to the origin of an image space coordinate system to obtain an image translation matrix;
rotating the image triangle around the x axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the x axis of the image space coordinate system;
rotating the image triangle around the y axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the y axis of the image space coordinate system;
and according to the corresponding relation, rotating the image triangle around the z axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the z axis of the image space coordinate system.
In an optional manner, according to an order among the image space marker points, an order among the target space marker points, and a correspondence between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, the image space tetrahedral model and the target space tetrahedral model are subjected to rigid transformation to determine an image translation matrix and an image rotation matrix of the image space marker point relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker point relative to a target space coordinate system, further including:
moving the center of a circumscribed circle of the target triangle to the origin of a target space coordinate system to obtain a target translation matrix;
rotating the target triangle around the x axis of the target space coordinate system, and calculating to obtain a rotation matrix rotating around the x axis of the target space coordinate system;
and rotating the target triangle around the y axis of the target space coordinate system, and calculating to obtain a rotation matrix rotating around the y axis of the target space coordinate system.
According to another aspect of the embodiments of the present invention, there is provided a marker point registration apparatus based on a tetrahedral structure, including:
a data set acquisition module, configured to respectively acquire an image space data set and a target space data set, wherein the image space data set comprises image space marker point information of a three-dimensional virtual image space, and the target space data set comprises target space marker point information of a patient operation target space;
the model construction module is used for selecting four mark points in the image space data set as vertexes to form an image space tetrahedral model, and selecting corresponding four mark points from the target space data set as vertexes to form a target space tetrahedral model;
a scaling determination module for determining a scaling of the image space data set relative to the target space data set based on the image space tetrahedral model and the target space tetrahedral model;
a scaling module, configured to scale the image space marker points in the image space data set according to the scaling ratio to obtain a scaled image space data set;
a correspondence determining module, configured to calculate, according to the image space marker point information in the scaled image space data set and the target space marker point information, the correlation between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, so as to determine the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set;
a rigid transformation module, configured to perform rigid transformation on the image space tetrahedral model and the target space tetrahedral model according to the order among the image space marker points, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set, and to determine an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system;
and the conversion matrix determining module is used for determining a conversion matrix according to the corresponding relation, the scaling, the image translation matrix, the image rotation matrix, the target translation matrix and the target rotation matrix so as to register each image space mark point in the image space data set with each corresponding target space mark point in the target space data set according to the conversion matrix to obtain a registered image space data set.
According to another aspect of the embodiments of the present invention, there is provided a computer apparatus including:
a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another via the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation of the marker point registration method based on the tetrahedral structure.
According to another aspect of the embodiments of the present invention, a surgical navigation system is provided, which includes the above-mentioned marker point registration device based on a tetrahedral structure.
According to a further aspect of the embodiments of the present invention, there is provided a computer-readable storage medium having stored therein at least one executable instruction which, when run on a computer device, causes the computer device to perform the operations of the above tetrahedral-structure-based marker registration method.
According to the embodiment of the invention, image space marker points and target space marker points are selected to construct a tetrahedral structure, the constructed tetrahedral structure is used to automatically match the order of the data point sets, and the triangle relationship is combined to obtain the conversion matrix. The matching operation process is thereby optimized, achieving the beneficial effects that no particular matching order of the data point sets is required and the matching accuracy of the marker points is effectively improved.
Furthermore, error factors are taken into account: an optimization function is constructed and the conversion matrix is optimized over multiple points, so that a high-precision conversion matrix is obtained, mapping accuracy is improved, and error is reduced.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments of the present invention may be more clearly understood and implemented according to the content of this description, and in order that the above and other objects, features, and advantages of the embodiments of the present invention may be more readily understood, a detailed description of the present invention is provided below.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
Fig. 1 shows a flowchart of an embodiment of the tetrahedral-structure-based marker registration method of the present invention, which is performed by a tetrahedral-structure-based marker registration apparatus. As shown in Fig. 1, the method comprises the following steps:
step 110: respectively acquiring an image space data set and a target space data set, wherein the image space data set comprises image space mark point information of a three-dimensional virtual image space, and the target space data set comprises target space mark point information of a patient operation target space.
The image space data set P_img is established from medical image data acquired in the three-dimensional virtual image space. The target space data set P_tar contains target space marker point information in the patient's surgical target space, i.e., a corresponding set of marker points selected with a surgical instrument at the affected part of the patient during the surgical operation.
The image space marking point information comprises coordinates of the image space marking points in an image space coordinate system. The target space mark point information comprises coordinates of the target space mark point in a world coordinate system.
In the embodiment of the invention, corresponding marker points are selected in the three-dimensional virtual image model to form the image space data set; a corresponding set of marker points is then selected at the affected part of the patient with a surgical instrument to form the target space data set, and both marker point sets are recorded. The order of the marker points in the two sets does not correspond.
Step 120: selecting four mark points in the image space data set as vertexes to form an image space tetrahedral model, and selecting corresponding four mark points from the target space data set as vertexes to form a target space tetrahedral model.
In the embodiment of the invention, the image space data set is divided into an image space tetrahedral frame part F_Timg and an image space particle swarm part F_Pimg, and the target space data set is divided into a target space tetrahedral frame part F_Ttar and a target space particle swarm part F_Ptar. The image space particle swarm part F_Pimg and the target space particle swarm part F_Ptar can be regarded as randomly distributed discrete particles.
The image space tetrahedral frame part F_Timg is formed by selecting four image space coordinate points in the image space data set. In the embodiment of the invention, the vertices of the image space tetrahedral frame part F_Timg are the first four elements in the image space data set, i.e., the first four image space coordinate points. The first element (i.e., the first image space marker point) is called the sleep window. The other three elements (the other three image space marker points selected as vertices) are called image space base points, and the plane formed by the image space base points is called the image space structural plane.
The target space tetrahedral frame part F_Ttar is formed by selecting four target space coordinate points in the target space data set. In the embodiment of the invention, the vertices of the target space tetrahedral frame part F_Ttar are likewise the first four elements in the target space data set, i.e., the first four target space coordinate points. The first element (i.e., the first target space marker point) is called the sleep window. The other three elements (the other three target space marker points selected as vertices) are called target space base points, and the plane formed by the target space base points is called the target space structural plane.
As shown in Fig. 2, S is the sleep window of the target space tetrahedral frame part, and A_T, B_T, C_T are the base points of the target space data set, i.e., the remaining 3 target space marker points that form the target space tetrahedral frame part. A_I, B_I, C_I are the base points of the image space data set, i.e., the remaining 3 image space marker points that form the image space tetrahedral frame part. The triangles formed by A_T, B_T, C_T and by A_I, B_I, C_I must not be isosceles or equilateral triangles. Denote A_I, B_I, C_I as P_1(x_1, y_1, z_1), P_2(x_2, y_2, z_2), P_3(x_3, y_3, z_3); the triangle they form is the image triangle P_1P_2P_3. Denote A_T, B_T, C_T as N_1(x_1', y_1', z_1'), N_2(x_2', y_2', z_2'), N_3(x_3', y_3', z_3'); the triangle they form is the target triangle N_1N_2N_3. The matching order of the base points is not considered here: P_1(x_1, y_1, z_1) may correspond to any one of N_1, N_2, N_3.
Step 130: determining a scaling of the image space data set relative to the target space data set based on the image space tetrahedral model and the target space tetrahedral model.
Specifically, let the image triangle P_1P_2P_3 formed by the image base points in the image space tetrahedral model have circumcircle center C_P, normal vector n, and circumcircle radius R_P. From the triangle relationship, take the two edge vectors of the image triangle (equation 1):

v_1 = P_2 - P_1,  v_2 = P_3 - P_1    (1)

Thus, the normal vector of the image triangle is:

n = v_1 x v_2    (2)

Normalizing the normal vector gives:

n' = n / |n|    (3)

The unit normal vector n' of the image triangle is thereby obtained from equation (1) and equation (3). From the normal vector n' of the image triangle and the image space marker point P_1 of the image triangle, the equation of the plane containing the image triangle P_1P_2P_3 is found:

f(x, y, z) = n' . ((x, y, z) - P_1) = 0    (4)

By the property of the circumscribed circle of a triangle, the distances from the circumcircle center to each vertex of the triangle are equal, giving:

|C_P - P_1| = |C_P - P_2| = |C_P - P_3|    (5)

In addition, the circumcircle center C_P must satisfy the plane equation above. Therefore, the circumcircle center coordinates C_P = (l, m, n) of the image triangle can be calculated from equation (4) and equation (5). The circumcircle radius of the image triangle is then:

R_P = |C_P - P_1|

In the same way, the circumcircle center C_N = (l', m', n'), the normal vector, and the circumcircle radius R_N of the target triangle N_1N_2N_3 are obtained.

After the circumcircle radius of the image triangle and the circumcircle radius of the target triangle are obtained, the scaling ratio, i.e., the scaling parameter k, is determined:

k = R_N / R_P    (6)

thereby finally determining the scaling k of the image space data set relative to the target space data set.
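The computation in equations (1)-(6) can be sketched in plain Python. This is an illustrative sketch, not part of the embodiment: the function names are invented, and equation (5) is solved in closed form by writing the center inside the triangle's plane rather than by intersecting equations (4) and (5) symbolically.

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def triangle_normal(p1, p2, p3):
    """Unit normal of triangle p1p2p3 (equations (1)-(3))."""
    n = cross(sub(p2, p1), sub(p3, p1))
    ln = math.sqrt(dot(n, n))
    return tuple(x / ln for x in n)

def circumcircle(p1, p2, p3):
    """Circumcircle center and radius of a triangle in 3-D space.

    Writing the center as c = p1 + s*v1 + t*v2 keeps it in the triangle's
    plane (equation (4)); s and t then follow from the equal-distance
    condition |c-p1| = |c-p2| = |c-p3| (equation (5)).
    """
    v1, v2 = sub(p2, p1), sub(p3, p1)
    a, b, c = dot(v1, v1), dot(v1, v2), dot(v2, v2)
    det = 2.0 * (a * c - b * b)  # non-zero for a non-degenerate triangle
    s = c * (a - b) / det
    t = a * (c - b) / det
    center = tuple(p1[i] + s * v1[i] + t * v2[i] for i in range(3))
    radius = math.sqrt(dot(sub(center, p1), sub(center, p1)))
    return center, radius

# Scaling parameter k = R_N / R_P (equation (6)), on two sample triangles:
_, R_P = circumcircle((0, 0, 0), (2, 0, 0), (0, 2, 0))  # image triangle
_, R_N = circumcircle((0, 0, 0), (4, 0, 0), (0, 4, 0))  # target triangle
k = R_N / R_P
```

For the two right triangles above, each circumcircle center is the midpoint of the hypotenuse, so k evaluates to 2.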
Step 140: scaling the image space marker points in the image space data set according to the scaling ratio to obtain a scaled image space data set.
After the scaling k of the image space data set relative to the target space data set is determined, the coordinates of each marker point on the image triangle are multiplied by the scaling k, so that the scaled coordinates of the image space marker points on the image triangle are consistent in scale with the coordinates of the marker points on the target triangle.
The coordinates P(x, y, z) of the image triangle are converted to P'(x', y', z'), i.e.:

P' = k . P

At this time, the 3 image space marker points P_1, P_2, P_3 constituting the image triangle and the corresponding circumcircle center C_P are converted into P_1', P_2', P_3' and C_P', respectively.
Similarly, all image space marker points in the image space data set are scaled according to the scaling ratio to obtain the scaled image space data set. Before scaling, the image space data set and the target space data set are related by a similarity transformation; after scaling, the data scale of the image space marker points in the scaled image space data set is the same as that of the target space marker points in the target space data set, so that only a rigid transformation is needed. That is, scaling converts the similarity transformation into a rigid transformation.
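As a small illustration (the helper name is hypothetical), applying P' = k . P to a whole data set is a coordinate-wise multiplication:

```python
def scale_points(points, k):
    """Apply P' = k * P to every marker point in the data set."""
    return [tuple(k * c for c in p) for p in points]
```

For example, `scale_points([(1.0, 2.0, 3.0)], 2.0)` returns `[(2.0, 4.0, 6.0)]`.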
Step 150: calculating the correlation between each image space marker point in the scaled image space data set and each target space marker point in the target space data set according to the image space marker point information in the scaled image space data set and the target space marker point information, so as to determine the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set.
In the embodiment of the present invention, determining the correspondence between each image space marker point in the scaled image space data set and each target space marker point in the target space data set specifically includes the following steps:
step 1501: and respectively calculating the image multi-dimensional distance vector between each image space mark point and each other image space mark point in the zoomed image space data set and the target multi-dimensional distance vector between each target space mark point and each other target space mark point in the zoomed image space data set according to the image space mark point information and the target space mark point information in the zoomed image space data set.
Since the distance between any two points is unchanged before and after rigid transformation, the distances between each image space mark point of the zoomed image space data set and the rest image space mark points are calculated in an image space coordinate system according to the property, the distances are combined into image multi-dimensional distance vectors, and the obtained image multi-dimensional distance vectors are arranged in the order from large to small.
Specifically, the distances between an image space marker point m_i in the scaled image space data set and the remaining image space marker points in the scaled image space data set are calculated; these distances may be denoted S(m_i, m_1), S(m_i, m_2), ..., S(m_i, m_L), where L is the number of image space marker points in the image space data set.

The distance S(m_i, m_j) between m_i and m_j is:

S(m_i, m_j) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2)

Arranging the S(m_i, m_j) in ascending order gives the image multi-dimensional distance vector of the image space marker point m_i: S_i = (S_i1, S_i2, ..., S_iL).
Meanwhile, the distance between each target space marker point in the target space data set and each of the remaining target space marker points is calculated in the world coordinate system; these distances are assembled into a target multi-dimensional distance vector, whose components are likewise arranged in ascending order.

Specifically, the distances between a target space marker point m_i in the target space data set and the remaining target space marker points in the target space data set are calculated; these may be denoted N(m_i, m_1), N(m_i, m_2), ..., N(m_i, m_L), where L is the number of target space marker points in the target space data set.

The distance N(m_i, m_j) between m_i and m_j is:

N(m_i, m_j) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2)

Arranging the N(m_i, m_j) in ascending order gives the target multi-dimensional distance vector of the target space marker point m_i: N_i = (N_i1, N_i2, ..., N_iL).
Step 1502: determining the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set according to the correlation between the image multi-dimensional distance vectors and the target multi-dimensional distance vectors.
Specifically, after the image multi-dimensional distance vectors are obtained, the correlation coefficient rho(S_i, S_j) between the image multi-dimensional distance vectors of the image space coordinate point m_i and the image space coordinate point m_j is calculated as:

rho(S_i, S_j) = cov(S_i, S_j) / (sigma_Si . sigma_Sj)

where cov(., .) is the covariance and sigma is the standard deviation. The correlation coefficient rho(S_i, S_j) is used to determine the correlation between the image space coordinate point m_i and the image space coordinate point m_j, and finally the order among the L image space coordinate points in the image space data set.

After the target multi-dimensional distance vectors are obtained, the correlation coefficient rho(N_i, N_j) between the target multi-dimensional distance vectors of the target space coordinate point m_i and the target space coordinate point m_j is calculated in the same way. The correlation coefficient rho(N_i, N_j) is used to determine the correlation between the target space coordinate point m_i and the target space coordinate point m_j, and finally the order among the L target space coordinate points in the target space data set.

After the image multi-dimensional distance vectors in the scaled image space data set and the target multi-dimensional distance vectors in the target space data set are obtained, the similarity between each image multi-dimensional distance vector and each target multi-dimensional distance vector is calculated, thereby obtaining the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set. The image multi-dimensional distance vectors corresponding to the L image space marker points in the image space coordinate system are S_1, S_2, ..., S_L, and the target multi-dimensional distance vectors corresponding to the L target space marker points in the world coordinate system (operation space coordinate system) are D_1, D_2, ..., D_L. The correlation coefficient between the image multi-dimensional distance vector of the j-th image space marker point and the target multi-dimensional distance vector of the i-th target space marker point is rho(D_i, S_j). An L x L matrix is formed with the correlation coefficients rho(D_i, S_j) as elements, and the maximum correlation coefficient is found column by column, thereby obtaining, for each target space marker point m_i in the world coordinate system, the corresponding image space marker point m_j in the image space coordinate system.
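Steps 1501-1502 can be sketched as follows, a minimal illustration using the Pearson correlation as the correlation coefficient rho; the function names are invented, and the sketch assumes the distance profiles are non-constant (as they are for points in general position):

```python
import math

def distance_vector(points, i):
    """Ascending distances from points[i] to every other point (S_i or N_i)."""
    return sorted(math.dist(points[i], q)
                  for j, q in enumerate(points) if j != i)

def pearson(u, v):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)  # assumes non-constant distance profiles

def match_points(img_pts, tar_pts):
    """For each target point i, return the index j of the image point whose
    distance profile is most correlated with it (column-wise maximum of the
    L x L matrix of rho(D_i, S_j))."""
    S = [distance_vector(img_pts, j) for j in range(len(img_pts))]
    D = [distance_vector(tar_pts, i) for i in range(len(tar_pts))]
    return [max(range(len(S)), key=lambda j: pearson(D[i], S[j]))
            for i in range(len(D))]
```

Because pairwise distances are invariant under rigid motion, a translated and reordered copy of a point set is matched back to the original ordering by the column-wise maxima.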
Step 160: performing rigid transformation on the image space tetrahedral model and the target space tetrahedral model according to the order among the image space marker points, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set, and determining an image translation matrix and an image rotation matrix of the image space marker points relative to the image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to the target space coordinate system.
A rigid transformation changes only the position (translation) and orientation (rotation) of an object, not its shape. In the embodiment of the invention, the image space tetrahedral model and the target space tetrahedral model are subjected to rigid transformation, and in this process the image translation matrix and image rotation matrix of the image space marker points relative to the image space coordinate system, and the target translation matrix and target rotation matrix of the target space marker points relative to the target space coordinate system, are obtained.
Specifically, step 1601: and calculating an image translation matrix of the image space mark points relative to an image space coordinate system and a target translation matrix of the target space mark points relative to a target space coordinate system.
The center C of a circumcircle of the image triangle is respectively determined in the image triangle of the set image space tetrahedral model and the target triangle in the target space tetrahedral model P (l, m, n) and the center C of the circumscribed circle of the target triangle N (l ', m ', n '). However, since the image space coordinate system does not correspond to the world coordinate system, C P (l, m, n) and C N (l ', m ', n ') are not at the same distance from the origin of the coordinate system, i.e. the positions of the image triangle and the target triangle in the corresponding coordinate system are not identical.
Therefore, when the image space tetrahedral model and the target space tetrahedral model are subjected to rigid transformation, the circumcircle center C_P(l, m, n) of the image triangle is moved to the origin of the image space coordinate system, and the circumcircle center C_N(l′, m′, n′) of the target triangle is moved to the origin of the world coordinate system, so that the positions of the image triangle and the target triangle in their coordinate systems become consistent.
Thereby the image translation matrix of the image triangle is obtained as the homogeneous transformation matrix T_m:

T_m =
[ 1  0  0  -l ]
[ 0  1  0  -m ]
[ 0  0  1  -n ]
[ 0  0  0   1 ]
According to the image translation matrix T_m, the coordinates of the image space marker points in the scaled image space dataset P are translated to obtain a translated image space dataset P′, where P′ = T_m·P.
In the same manner, the circumcircle center C_N(l′, m′, n′) of the target triangle in the world coordinate system is translated to the origin; the target translation matrix is the homogeneous transformation matrix T_m′:

T_m′ =
[ 1  0  0  -l′ ]
[ 0  1  0  -m′ ]
[ 0  0  1  -n′ ]
[ 0  0  0    1 ]
According to the target translation matrix T_m′, the coordinates of the target space marker points in the target space dataset N are translated to obtain a translated target space dataset N′, where N′ = T_m′·N.
The translation matrix of the image space dataset with respect to the target space dataset is: T_m·(T_m′)^(-1).
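As an illustrative sketch only (not part of the claimed embodiment), the translation step above can be expressed in Python with NumPy; the function names `circumcenter` and `translation_to_origin` are chosen here for illustration:

```python
import numpy as np

def circumcenter(a, b, c):
    """Circumcenter of the 3-D triangle (a, b, c), found in the
    triangle's own plane: center = a + x*ab + y*ac with equal
    distances to all three vertices."""
    a, b, c = map(np.asarray, (a, b, c))
    ab, ac = b - a, c - a
    m = np.array([[ab @ ab, ab @ ac],
                  [ab @ ac, ac @ ac]])
    rhs = 0.5 * np.array([ab @ ab, ac @ ac])
    x, y = np.linalg.solve(m, rhs)
    return a + x * ab + y * ac

def translation_to_origin(center):
    """Homogeneous 4x4 matrix (the role of T_m) moving `center` to
    the origin of its coordinate system."""
    t = np.eye(4)
    t[:3, 3] = -np.asarray(center, dtype=float)
    return t
```

Applying the returned matrix to the circumcenter itself (in homogeneous coordinates) maps it to the origin, which is exactly the translation used for both T_m and T_m′.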
Step 1602: and calculating an image rotation matrix of the image space mark points relative to the image space coordinate system and a target rotation matrix of the target space mark points relative to the target space coordinate system.
The image rotation matrix comprises the rotation matrix T_x of the image space dataset about the x-axis of the image space coordinate system, the rotation matrix T_y about the y-axis of the image space coordinate system, and the rotation matrix T_z about the z-axis of the image space coordinate system. The target rotation matrix comprises the rotation matrix T_x′ of the target space dataset about the x-axis of the world coordinate system and the rotation matrix T_y′ about the y-axis of the world coordinate system.
When the image space tetrahedral model is rigidly transformed, the plane of the image triangle first needs to be rotated to be perpendicular to the z-axis, after which the image triangle is brought into complete coincidence with the target triangle by rotating about the z-axis. The first part comprises two steps: a rotation about the x-axis of the image space coordinate system, followed by a rotation about the y-axis of the image space coordinate system. The rotation matrix about the x-axis of the image space coordinate system is T_x:

T_x =
[ 1    0       0     0 ]
[ 0  cos θ  -sin θ   0 ]
[ 0  sin θ   cos θ   0 ]
[ 0    0       0     1 ]
The angle θ is the angle between the z-axis and the vector obtained by projecting the normal vector of the image triangle onto the YOZ plane of the image space coordinate system; it is the angle through which the triangle must be rotated about the x-axis. Denoting the normal vector by (N_Px, N_Py, N_Pz), the cosine cos θ and sine sin θ corresponding to the angle θ are calculated as:

cos θ = N_Pz / √(N_Py² + N_Pz²)
sin θ = N_Py / √(N_Py² + N_Pz²)
The rotation matrix about the y-axis of the image space coordinate system is T_y:

T_y =
[ cos φ  0  -sin φ  0 ]
[   0    1     0    0 ]
[ sin φ  0   cos φ  0 ]
[   0    0     0    1 ]

The angle φ is the angle between the z-axis and the vector obtained by projecting the normal vector (after the rotation about the x-axis) onto the XOZ plane of the image space coordinate system; it is the angle through which the triangle must be rotated about the y-axis. The corresponding cosine cos φ and sine sin φ of the angle φ are calculated as:

cos φ = N′_Pz / √(N_Px² + N′_Pz²)
sin φ = N_Px / √(N_Px² + N′_Pz²)

where the z-component of the normal vector after the rotation about the x-axis is:

N′_Pz = N_Py·sin θ + N_Pz·cos θ
the image triangle is made parallel to the XOY plane of the image space coordinate system by rotation about the x-axis and rotation about the y-axis. After the translation conversion of the translation matrix, the center of the circumscribed circle of the image triangle is at the origin of the image space coordinate system, so that the image triangle is on the XOY plane of the image space coordinate system after the rotation around the x-axis and the rotation around the y-axis.
Correspondingly, the target triangle also needs to be rotated about the x-axis and the y-axis of the world coordinate system; the specific calculation is the same in principle as for the image triangle and is not repeated here. Thereby the rotation matrix T_x′ of the target triangle about the x-axis of the world coordinate system and the rotation matrix T_y′ about the y-axis of the world coordinate system are obtained; after these rotations the target triangle lies on the XOY plane of the world coordinate system.
After the rotation around the x axis and the rotation around the y axis, the image triangle and the target triangle are both on the XOY plane of the corresponding coordinate system, but the positions of the three marked points of the image triangle and the target triangle are not corresponding, so that the image triangle needs to be rotated relative to the z axis of the image space coordinate system, and the image triangle and the target triangle can be completely overlapped.
Since the order of the image space marker points, the order of the target space marker points and their correspondence have already been obtained, the correspondence between the image space marker points on the image triangle and the target space marker points on the target triangle is known at this point. According to this correspondence, the image triangle is rotated about the z-axis of the image space coordinate system through an angle γ; the rotation matrix T_z of the image triangle about the z-axis is:

T_z =
[ cos γ  -sin γ  0  0 ]
[ sin γ   cos γ  0  0 ]
[   0       0    1  0 ]
[   0       0    0  1 ]
The image triangle is rotated about the z-axis by the rotation matrix T_z, so that each image space marker point of the image triangle is rotated into coincidence with the corresponding target space marker point of the target triangle, whereby the image triangle and the target triangle completely overlap.
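A minimal sketch of the z-axis alignment, assuming both triangles already lie on the XOY plane with circumcenters at the origin; the rotation angle is taken from one corresponded marker-point pair:

```python
import numpy as np

def rotation_z(p_img, p_tar):
    """T_z (sketch): rotation about the z-axis taking an in-plane
    image marker point onto its corresponding target marker point."""
    gamma = np.arctan2(p_tar[1], p_tar[0]) - np.arctan2(p_img[1], p_img[0])
    t = np.eye(4)
    t[0, 0], t[0, 1] = np.cos(gamma), -np.sin(gamma)
    t[1, 0], t[1, 1] = np.sin(gamma), np.cos(gamma)
    return t
```

In practice the angle could be averaged over all three corresponded vertex pairs; using a single pair is a simplification of this sketch.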
Thereby the rotation matrix T_x of the image space dataset about the x-axis of the image space coordinate system, the rotation matrix T_y about the y-axis of the image space coordinate system, the rotation matrix T_z about the z-axis of the image space coordinate system, the rotation matrix T_x′ of the target space dataset about the x-axis of the world coordinate system and the rotation matrix T_y′ about the y-axis of the world coordinate system are obtained. The translated and rotated image space dataset P‴ and the translated and rotated target space dataset N″ are:

P‴ = P·k·T_m·T_x·T_y·T_z

N″ = N·T_m′·T_x′·T_y′
step 170: and determining a conversion matrix according to the corresponding relation, the scaling, the image translation matrix, the image rotation matrix, the target translation matrix and the target rotation matrix, and registering each image space mark point in the image space data set and each target space mark point corresponding to the target space data set according to the conversion matrix to obtain a registered image space data set.
The scaling k, the image translation matrix, the image rotation matrix, the target translation matrix and the target rotation matrix are obtained in the steps above. The transformation matrix T_P is thus determined as:

T_P = (T_m′)^(-1)·(T_x′)^(-1)·(T_y′)^(-1)·T_m·T_x·T_y·T_z·k
According to the transformation matrix T_P, the image space dataset is mapped to the target space dataset; through the transformation matrix T_P the image space dataset is completely mapped onto the target space dataset, i.e.:

P_tar = T_P·P_img
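The composition of T_P can be sketched as below; this mirrors the formula in the text under the assumption of a column-vector convention (p′ = T_P·p) and writes the scaling k as a homogeneous matrix:

```python
import numpy as np

def transform_matrix(k, Tm, Tx, Ty, Tz, Tm_p, Tx_p, Ty_p):
    """Sketch of T_P = (T_m')^(-1)·(T_x')^(-1)·(T_y')^(-1)·T_m·T_x·T_y·T_z·k.
    The primed matrices are the target-side transforms; the scaling k
    is expressed as a homogeneous uniform-scaling matrix."""
    inv = np.linalg.inv
    S = np.diag([k, k, k, 1.0])
    return inv(Tm_p) @ inv(Tx_p) @ inv(Ty_p) @ Tm @ Tx @ Ty @ Tz @ S
```

With all constituent matrices set to the identity, T_P reduces to pure scaling, which is a quick sanity check on the composition.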
in the embodiment of the invention, after the conversion matrix TP is obtained, the conversion matrix is optimized. And obtaining the error ranges of the image space mark points and the target space mark points on the transformed image space after the transformation of each image space mark point through the transformation matrix, constructing an optimization function according to the error ranges, and optimizing the transformation matrix according to the optimization function to obtain the optimized transformation matrix.
Specifically: owing to limitations such as equipment precision, certain errors in x, y, z and γ exist when the image space dataset is mapped so as to coincide with the target space dataset, so each particle (an image space marker point in the image space dataset) may move on the XOY plane to optimize its position. Therefore some disturbance (i.e. an error range) is artificially added to the parameters (x, y, z, γ), turning them from constants into variables, so that all data in the image space dataset are finely adjusted by these parameters when matched to the target space dataset.
In the embodiment of the invention, the disturbance on the x-axis is δx, the disturbance on the y-axis is δy, and the disturbance on the z-axis is δz. δx and δy satisfy:

δx² + δy² ≤ s²
the value of s depends on the accuracy of the registration device, defined in the embodiments of the present invention as
The error δz on the z-axis is ±1 mm, and since the error δγ of the rotation angle γ is particularly small, the trigonometric functions can be simplified by the small-angle approximation (sin δγ ≈ δγ, cos δγ ≈ 1). Therefore, according to the error ranges of these parameters, the optimization matrix T_0 is:

T_0 =
[  1   -δγ   0   δx ]
[ δγ    1    0   δy ]
[  0     0   1   δz ]
[  0     0   0    1 ]
Thus, when the image space dataset is mapped onto the target space dataset, the optimization matrix is applied in addition:

P_tar′ = T_0·T_P·P_img
The above formula is recast as a minimization problem and used as the optimization function:

f(δx, δy, δz, δγ) = min ‖P_tar − T_0·T_P·P_img‖
through the optimization function, P is calculated tar -T 0 ·T P ·P img Minimum T 0 Optimizing the matrix T 0 As the optimal solution T 0m So as to obtain the optimized conversion matrix as follows: t is a unit of Pm =T 0m ·T P
In the embodiment of the invention, the objective function is optimized by a particle swarm optimization algorithm, which yields a good objective value at a high optimization speed and finally finds the optimal solution.
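A minimal particle swarm optimization sketch that could drive the objective above; the hyperparameters (inertia w, acceleration coefficients c1, c2, swarm size) are generic defaults of this illustration, not values from the embodiment:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: each particle tracks its personal best, and the
    swarm is attracted toward the global best found so far."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)           # keep particles in bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())
```

For the registration problem, `bounds` would encode the error ranges of (δx, δy, δz, δγ) and `f` would be the registration objective.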
According to the embodiment of the invention, a tetrahedral structure is constructed from the selected image space marker points and target space marker points; the constructed tetrahedral structure is used to automatically match the order of the data point sets, and the transformation matrix is obtained by means of the triangle relations. The registration workflow is thereby streamlined, with the beneficial effects that no requirement is imposed on the registration order of the data point sets and the registration accuracy of the marker points is effectively improved.
Furthermore, error factors are taken into consideration: an optimization function is constructed and the transformation matrix is optimized over multiple points, so that a high-precision transformation matrix is obtained, the mapping accuracy is improved and the error is reduced.
Fig. 3 shows a schematic structural diagram of an embodiment of the marker point registration apparatus based on a tetrahedral structure. As shown in fig. 3, the apparatus 300 includes: a data set acquisition module 310, a model construction module 320, a scaling determination module 330, a scaling module 340, a correspondence determination module 350, a rigid transformation module 360, a transformation matrix determination module 370.
A data set obtaining module 310, configured to obtain an image space data set and a target space data set, respectively, where the image space data set includes image space marker point information of a three-dimensional virtual image space, and the target space data set includes target space marker point information of a patient operation target space;
the model construction module 320 is configured to select four marker points in the image space data set as vertices to form an image space tetrahedral model, and select corresponding four marker points from the target space data set as vertices to form a target space tetrahedral model;
a scaling determination module 330 configured to determine a scaling of the image space data set with respect to the target space data set according to the image space tetrahedral model and the target space tetrahedral model;
the scaling module 340 is configured to scale the image space mark points in the image space data set according to the scaling ratio, so as to obtain a scaled image space data set;
a correspondence determining module 350, configured to calculate, according to the image space marker point information in the scaled image space data set and the target space marker point information, a correlation between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, so as to determine the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set;
a rigid transformation module 360, configured to perform rigid transformation on the image space tetrahedral model and the target space tetrahedral model according to a sequence between the image space marker points, a sequence between the target space marker points, and a corresponding relationship between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, so as to determine an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system;
a transformation matrix determining module 370, configured to determine a transformation matrix according to the correspondence, the scaling, the image translation matrix, the image rotation matrix, the target translation matrix, and the target rotation matrix, so as to register, according to the transformation matrix, each image space mark point in the image space data set with each target space mark point corresponding to the target space data set, so as to obtain a registered image space data set.
The specific working process of the marker point registration device based on the tetrahedral structure in the embodiment of the invention is the same as the method steps of the marker point registration method based on the tetrahedral structure, and the detailed description is omitted here.
According to the embodiment of the invention, a tetrahedral structure is constructed from the selected image space marker points and target space marker points; the constructed tetrahedral structure is used to automatically match the order of the data point sets, and the transformation matrix is obtained by means of the triangle relations. The registration workflow is thereby streamlined, with the beneficial effects that no requirement is imposed on the registration order of the data point sets and the registration accuracy of the marker points is effectively improved.
Furthermore, error factors are taken into consideration: an optimization function is constructed and the transformation matrix is optimized over multiple points, so that a high-precision transformation matrix is obtained, the mapping accuracy is improved and the error is reduced.
The embodiment of the invention also provides an operation navigation system, which comprises the marker point registration device based on the tetrahedral structure, wherein the registration working flow of the navigation system is the same as the method flow of the marker point registration method based on the tetrahedral structure, and the detailed description is omitted here.
Fig. 4 is a schematic structural diagram of an embodiment of the computer device of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computer device.
As shown in fig. 4, the computer apparatus may include: a processor (processor) 402, a Communications Interface 404, a memory 406, and a Communications bus 408.
Wherein: the processor 402, the communication interface 404 and the memory 406 communicate with each other via the communication bus 408. The communication interface 404 is used for communicating with network elements of other devices, such as clients or other servers. The processor 402 is used for executing the program 410, and may specifically perform the relevant steps of the method embodiments described above.
In particular, program 410 may include program code comprising computer-executable instructions.
The processor 402 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement an embodiment of the invention. The computer device includes one or more processors, which may be the same type of processor, such as one or more CPUs; or may be different types of processors such as one or more CPUs and one or more ASICs.
A memory 406 for storing a program 410. Memory 406 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 410 may specifically be invoked by the processor 402 to cause the computer device to perform the following operations:
respectively acquiring an image space data set and a target space data set, wherein the image space data set comprises image space mark point information of a three-dimensional virtual image space, and the target space data set comprises target space mark point information of a patient operation target space;
selecting four mark points in the image space data set as vertexes to form an image space tetrahedral model, and selecting corresponding four mark points from the target space data set as vertexes to form a target space tetrahedral model;
determining a scaling of the image space data set relative to the target space data set based on the image space tetrahedral model and the target space tetrahedral model;
zooming the image space mark points in the image space data set according to the zooming proportion to obtain a zoomed image space data set;
calculating the correlation between each image space mark point in the zoomed image space data set and each target space mark point in the target space data set according to the image space mark point information and the target space mark point information in the zoomed image space data set respectively so as to determine the sequence between the image space mark points in the zoomed image space data set, the sequence between the target space mark points and the corresponding relation between the image space mark points in the zoomed image space data set and the target space mark points in the target space data set;
according to the sequence among all image space mark points, the sequence among all target space mark points and the corresponding relation between all image space mark points in the zoomed image space data set and all target space mark points in the target space data set, carrying out rigid transformation on the image space tetrahedral model and the target space tetrahedral model, and determining an image translation matrix and an image rotation matrix of the image space mark points relative to an image space coordinate system and a target translation matrix and a target rotation matrix of the target space mark points relative to a target space coordinate system;
and determining a conversion matrix according to the corresponding relation, the scaling, the image translation matrix, the image rotation matrix, the target translation matrix and the target rotation matrix, and registering each image space mark point in the image space data set and each target space mark point corresponding to the target space data set according to the conversion matrix to obtain a registered image space data set.
In an optional manner, after determining a transformation matrix according to the correspondence, the scaling, the image translation matrix, the image rotation matrix, the target translation matrix, and the target rotation matrix, the method further includes:
acquiring the error range of the image space mark point and the target space mark point on the transformed image space after each image space mark point is transformed by the transformation matrix;
constructing an optimization function according to the error range;
and optimizing the conversion matrix according to the optimization function to obtain the optimized conversion matrix.
In an alternative mode, the image space tetrahedron includes an image triangle formed by three image space marker points, and the target space tetrahedron corresponds to a target triangle formed by three target space marker points;
determining a scaling of the image space data set relative to the target space data set from the image space tetrahedral model and the target space tetrahedral model, further comprising:
determining a normal vector and circumscribed circle center coordinates of the image triangle according to the coordinate information of the image space mark points and a triangle theorem;
determining a normal vector and circumscribed circle center coordinates of the target triangle according to the coordinate information of the target space mark points and the triangle theorem;
determining the radius of a circumscribed circle of the image triangle according to the normal vector and the coordinates of the center of the circumscribed circle of the image triangle, and determining the radius of the circumscribed circle of the target triangle according to the normal vector and the coordinates of the center of the circumscribed circle of the target triangle;
and determining the scaling according to the radius of the circumscribed circle of the image triangle and the radius of the circumscribed circle of the target triangle.
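The scaling step can be sketched as follows; the direction of the ratio (target radius over image radius) is an assumption of this illustration:

```python
import numpy as np

def circumradius(a, b, c):
    """Circumradius of triangle (a, b, c): R = |AB|·|BC|·|CA| / (4·area)."""
    a, b, c = map(np.asarray, (a, b, c))
    la = np.linalg.norm(b - c)
    lb = np.linalg.norm(a - c)
    lc = np.linalg.norm(a - b)
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    return la * lb * lc / (4.0 * area)

def scaling_ratio(image_tri, target_tri):
    """Scaling k of the image data set relative to the target data set,
    taken here as the ratio of circumcircle radii."""
    return circumradius(*target_tri) / circumradius(*image_tri)
```

Scaling every image space marker point by k then yields the scaled image space data set used in the subsequent steps.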
In an optional manner, calculating a correlation between each image space mark point in the zoomed image space data set and each target space mark point in the target space data set according to the image space mark point information and the target space mark point information in the zoomed image space data set, respectively, to determine an order between each image space mark point in the zoomed image space data set, an order between each target space mark point, and a correspondence between each image space mark point in the zoomed image space data set and each target space mark point in the target space data set, further includes:
respectively calculating image multi-dimensional distance vectors of each image space mark point and each other image space mark point in the zoomed image space data set and target multi-dimensional distance vectors of each target space mark point and each other target space mark point according to the image space mark point information and the target space mark point information in the zoomed image space data set;
and determining the sequence between the image space mark points in the zoomed image space data set, the sequence between the target space mark points, and the corresponding relation between the image space mark points in the zoomed image space data set and the target space mark points in the target space data set according to the correlation between the image multi-dimensional distance vector and the target multi-dimensional distance vector.
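As a rough illustration of matching by multi-dimensional distance vectors (the greedy assignment below is a simplification of the correlation-based matching described above):

```python
import numpy as np

def distance_signature(points):
    """For every point, the sorted distances to all other points:
    a pose-invariant stand-in for the 'multi-dimensional distance
    vector' of the text."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, 1:]      # drop the zero self-distance

def match_points(img_pts, tar_pts):
    """Pair each image point with the target point whose distance
    signature matches best (greedy assignment on signature cost)."""
    cost = np.linalg.norm(
        distance_signature(img_pts)[:, None, :]
        - distance_signature(tar_pts)[None, :, :], axis=-1)
    pairs, used = [], set()
    for i in range(cost.shape[0]):
        j = min((j for j in range(cost.shape[1]) if j not in used),
                key=lambda j: cost[i, j])
        used.add(j)
        pairs.append((i, j))
    return pairs
```

Because the distance signatures are invariant under translation and rotation, the pairing can be recovered regardless of the order in which the marker points were acquired, which is the point of this step.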
In an optional manner, according to an order between the image space marker points, an order between the target space marker points, and a correspondence relationship between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, rigidly transforming the image space tetrahedral model and the target space tetrahedral model, and determining an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system, further include:
moving the center of a circumscribed circle of the image triangle to the origin of an image space coordinate system to obtain an image translation matrix;
rotating the image triangle around the x axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the x axis of the image space coordinate system;
rotating the image triangle around the y axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the y axis of the image space coordinate system;
and according to the corresponding relation, rotating the image triangle around the z axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the z axis of the image space coordinate system.
In an optional manner, according to an order between the image space marker points, an order between the target space marker points, and a correspondence between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, rigid transformation is performed on the image space tetrahedral model and the target space tetrahedral model, and an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system are determined, further including:
moving the circle center of the circumscribed circle of the target triangle to the origin of a target space coordinate system to obtain a target translation matrix;
rotating the target triangle around the x axis of the target space coordinate system, and calculating to obtain a rotation matrix rotating around the x axis of the target space coordinate system;
and rotating the target triangle around the y axis of the target space coordinate system, and calculating to obtain a rotation matrix rotating around the y axis of the target space coordinate system.
According to the embodiment of the invention, a tetrahedral structure is constructed from the selected image space marker points and target space marker points; the constructed tetrahedral structure is used to automatically match the order of the data point sets, and the transformation matrix is obtained by means of the triangle relations. The registration workflow is thereby streamlined, with the beneficial effects that no requirement is imposed on the registration order of the data point sets and the registration accuracy of the marker points is effectively improved.
Furthermore, error factors are taken into consideration: an optimization function is constructed and the transformation matrix is optimized over multiple points, so that a high-precision transformation matrix is obtained, the mapping accuracy is improved and the error is reduced.
An embodiment of the present invention provides a computer-readable storage medium, where the storage medium stores at least one executable instruction, and when the executable instruction is executed on a computer device/apparatus, the computer device/apparatus executes a tetrahedral structure-based mark point registration method in any of the above-mentioned method embodiments.
The executable instructions may be specifically configured to cause a computer device/apparatus to perform the following:
respectively acquiring an image space data set and a target space data set, wherein the image space data set comprises image space mark point information of a three-dimensional virtual image space, and the target space data set comprises target space mark point information of a patient operation target space;
selecting four mark points in the image space data set as vertexes to form an image space tetrahedral model, and selecting corresponding four mark points from the target space data set as vertexes to form a target space tetrahedral model;
determining a scaling of the image space data set relative to the target space data set based on the image space tetrahedral model and the target space tetrahedral model;
zooming the image space mark points in the image space data set according to the zooming proportion to obtain a zoomed image space data set;
calculating the correlation between each image space mark point in the zoomed image space data set and each target space mark point in the target space data set according to the image space mark point information and the target space mark point information in the zoomed image space data set respectively so as to determine the sequence among the image space mark points in the zoomed image space data set, the sequence among the target space mark points and the corresponding relationship among the image space mark points in the zoomed image space data set and the target space mark points in the target space data set;
performing rigid transformation on the image space tetrahedral model and the target space tetrahedral model according to the sequence among the image space marking points, the sequence among the target space marking points and the corresponding relationship between the image space marking points in the zoomed image space data set and the target space marking points in the target space data set, and determining an image translation matrix and an image rotation matrix of the image space marking points relative to an image space coordinate system and a target translation matrix and a target rotation matrix of the target space marking points relative to a target space coordinate system;
and determining a conversion matrix according to the corresponding relation, the scaling, the image translation matrix, the image rotation matrix, the target translation matrix and the target rotation matrix, and registering each image space mark point in the image space data set with each corresponding target space mark point in the target space data set according to the conversion matrix to obtain a registered image space data set.
In an optional manner, after determining a transformation matrix according to the correspondence, the scaling, the image translation matrix, the image rotation matrix, the target translation matrix, and the target rotation matrix, the method further includes:
acquiring the error range of the image space mark point and the target space mark point on the transformed image space after each image space mark point is transformed by the transformation matrix;
constructing an optimization function according to the error range;
and optimizing the conversion matrix according to the optimization function to obtain the optimized conversion matrix.
In an alternative mode, the image space tetrahedron includes an image triangle formed by three image space marker points, and the target space tetrahedron corresponds to a target triangle formed by three target space marker points;
determining a scaling of the image space data set relative to the target space data set from the image space tetrahedral model and the target space tetrahedral model, further comprising:
determining a normal vector and circumscribed circle center coordinates of the image triangle according to the coordinate information of the image space marker points and the triangle theorem;
determining a normal vector and circumscribed circle center coordinates of the target triangle according to the coordinate information of the target space marker points and the triangle theorem;
determining the radius of the circumscribed circle of the image triangle according to the normal vector and the coordinates of the center of the circumscribed circle of the image triangle, and determining the radius of the circumscribed circle of the target triangle according to the normal vector and the coordinates of the center of the circumscribed circle of the target triangle;
and determining the scaling according to the radius of the circumscribed circle of the image triangle and the radius of the circumscribed circle of the target triangle.
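The circumscribed-circle quantities above can be sketched as follows. The helper names are illustrative, and the convention that the image set is scaled toward the target (ratio r_target / r_image) is an assumption consistent with the scaling step described earlier:

```python
import numpy as np

def circumcircle(A, B, C):
    """Normal vector, circumcenter and circumradius of triangle ABC in 3D."""
    A, B, C = (np.asarray(p, float) for p in (A, B, C))
    u, v = B - A, C - A
    n = np.cross(u, v)                       # (unnormalized) plane normal
    center = A + (np.dot(u, u) * np.cross(v, n)
                  + np.dot(v, v) * np.cross(n, u)) / (2.0 * np.dot(n, n))
    radius = np.linalg.norm(center - A)      # distance from center to a vertex
    return n, center, radius

def scaling_ratio(img_tri, tgt_tri):
    """Scale applied to the image points so that the image triangle's
    circumradius matches the target triangle's (assumed convention)."""
    _, _, r_img = circumcircle(*img_tri)
    _, _, r_tgt = circumcircle(*tgt_tri)
    return r_tgt / r_img
```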
In an optional manner, calculating a correlation between each image space marker point in the scaled image space data set and each target space marker point in the target space data set according to the image space marker point information in the scaled image space data set and the target space marker point information, respectively, so as to determine an order among the image space marker points in the scaled image space data set, an order among the target space marker points, and a correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set, further includes:
respectively calculating, according to the image space marker point information in the scaled image space data set and the target space marker point information, an image multi-dimensional distance vector from each image space marker point to every other image space marker point in the scaled image space data set, and a target multi-dimensional distance vector from each target space marker point to every other target space marker point;
and determining, according to the correlation between the image multi-dimensional distance vectors and the target multi-dimensional distance vectors, the order among the image space marker points in the scaled image space data set, the order among the target space marker points, and the correspondence between the image space marker points in the scaled image space data set and the target space marker points in the target space data set.
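A minimal sketch of this correspondence step, assuming the unspecified correlation measure is replaced by the sum of squared differences between sorted distance vectors (points whose distance profiles differ least are matched):

```python
import numpy as np

def distance_profiles(pts):
    """Sorted vector of distances from each point to all the others."""
    pts = np.asarray(pts, float)
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return np.sort(D, axis=1)[:, 1:]         # drop the zero self-distance

def match_points(img_pts, tgt_pts):
    """Map each image point to the target point with the most similar
    distance profile (greedy nearest-profile assignment)."""
    Pi, Pt = distance_profiles(img_pts), distance_profiles(tgt_pts)
    cost = ((Pi[:, None, :] - Pt[None, :, :]) ** 2).sum(axis=2)
    corr, used = {}, set()
    for i in np.argsort(cost.min(axis=1)):   # most confident rows first
        j = min((j for j in range(len(tgt_pts)) if j not in used),
                key=lambda j: cost[i, j])
        corr[int(i)] = j
        used.add(j)
    return corr
```

Because the profiles are sorted, the matching is independent of the order in which the marker points were recorded, which is the point of this step.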
In an optional manner, performing rigid transformation on the image space tetrahedral model and the target space tetrahedral model according to the order among the image space marker points, the order among the target space marker points, and the correspondence between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, and determining an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system, further includes:
moving the center of a circumscribed circle of the image triangle to the origin of an image space coordinate system to obtain an image translation matrix;
rotating the image triangle around the x axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the x axis of the image space coordinate system;
rotating the image triangle around the y axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the y axis of the image space coordinate system;
and according to the corresponding relation, rotating the image triangle around the z axis of the image space coordinate system, and calculating to obtain a rotation matrix rotating around the z axis of the image space coordinate system.
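The x- and y-axis rotations above can be sketched as aligning the triangle's normal vector with the z axis; the z-axis rotation, which uses the correspondence to fix the in-plane order, is omitted here. Function names are illustrative:

```python
import numpy as np

def rot_x(a):
    """Rotation matrix for angle a about the x axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation matrix for angle a about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def align_normal_to_z(n):
    """Rotations about x then y that take the plane normal n onto +z."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    Rx = rot_x(np.arctan2(n[1], n[2]))       # zero the y component of n
    m = Rx @ n
    Ry = rot_y(-np.arctan2(m[0], m[2]))      # zero the remaining x component
    return Ry @ Rx
```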
In an optional manner, performing rigid transformation on the image space tetrahedral model and the target space tetrahedral model according to the order among the image space marker points, the order among the target space marker points, and the correspondence between each image space marker point in the scaled image space data set and each target space marker point in the target space data set, and determining an image translation matrix and an image rotation matrix of the image space marker points relative to an image space coordinate system, and a target translation matrix and a target rotation matrix of the target space marker points relative to a target space coordinate system, further includes:
moving the center of a circumscribed circle of the target triangle to the origin of a target space coordinate system to obtain a target translation matrix;
rotating the target triangle around the x axis of the target space coordinate system, and calculating to obtain a rotation matrix rotating around the x axis of the target space coordinate system;
and rotating the target triangle around the y axis of the target space coordinate system, and calculating to obtain a rotation matrix rotating around the y axis of the target space coordinate system.
According to the embodiments of the present invention, image space marker points and target space marker points are selected to construct tetrahedral structures, the constructed tetrahedral structures are used to automatically match the order of the data point sets, and the transformation matrix is obtained by combining the triangle relationships. The registration procedure is thereby streamlined, with the beneficial effects that no particular registration order of the data point sets is required and the registration accuracy of the marker points is effectively improved.
Furthermore, error factors are taken into account: an optimization function is constructed and the transformation matrix is optimized over multiple points, so that a high-precision transformation matrix is obtained, the mapping accuracy is improved, and the error is reduced.
An embodiment of the present invention provides a marker point registration device based on a tetrahedral structure, which is configured to execute the above-described marker point registration method based on a tetrahedral structure.
Embodiments of the present invention provide a computer program that can be called by a processor to enable a computer device to execute a marker point registration method based on a tetrahedral structure in any of the above method embodiments.
Embodiments of the present invention provide a computer program product, which includes a computer program stored on a computer-readable storage medium, the computer program including program instructions that, when run on a computer, cause the computer to execute the method for registering marker points based on a tetrahedral structure in any of the above-mentioned method embodiments.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limited to the order of execution unless otherwise specified.