CN114140504A - Three-dimensional interactive biomedical image registration method - Google Patents

Three-dimensional interactive biomedical image registration method

Info

Publication number
CN114140504A
CN114140504A (application CN202111479226.0A)
Authority
CN
China
Prior art keywords
image
dimensional
dimensional model
registered
template label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111479226.0A
Other languages
Chinese (zh)
Other versions
CN114140504B (en)
Inventor
屈磊
罗文婷
吴军
王慧敏
李园园
韩婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202111479226.0A priority Critical patent/CN114140504B/en
Publication of CN114140504A publication Critical patent/CN114140504A/en
Application granted granted Critical
Publication of CN114140504B publication Critical patent/CN114140504B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a three-dimensional interactive biomedical image registration method, which overcomes the difficulty, in the prior art, of further improving the accuracy of biomedical image registration algorithms. The method comprises the following steps: acquiring a three-dimensional biomedical image to be registered and a template label image; generating a three-dimensional model of the template label image; visualizing the three-dimensional model and the image to be registered; interactively adjusting the three-dimensional model of the template label image according to the image to be registered; obtaining corresponding matching points; and obtaining a registration result. By displaying the three-dimensional biomedical image in three dimensions and extracting feature-point information in real time from mouse feedback, the invention avoids inaccurate feature points and improves registration accuracy; it also greatly reduces the difficulty of interactive registration and improves the efficiency of human-computer interaction.

Description

Three-dimensional interactive biomedical image registration method
Technical Field
The invention relates to the technical field of three-dimensional biomedical image processing, in particular to a three-dimensional interactive biomedical image registration method.
Background
Three-dimensional biomedical images are rich in information, convenient to observe and highly intuitive, and their analysis plays an increasingly important role. With the development of brain-mapping initiatives, the integrated analysis of animal brain structural images and brain functional images has become an important research topic, and registration is an important step before brain image analysis.
Image registration is the process of matching and superimposing two images acquired under different conditions, and is widely applied in remote-sensing data analysis, computer vision, image processing and other fields. Visualization technology displays data on a computer screen as images and graphics, and is currently one of the hot topics in computer graphics and image processing research. As researchers continue to study registration, many new techniques and methods have emerged in image registration technology. Registration techniques are numerous; since no single technique is applicable to every field, different application environments must weigh many factors to select an appropriate one.
Existing image registration methods mainly comprise traditional methods and deep learning methods. Traditional methods are generally classified into grayscale-based and feature-based methods according to the image information used. Compared with traditional medical image registration methods, the greatest contribution of deep learning to medical image registration is alleviating the problem of slow processing. Although deep-learning-based registration is much faster than conventional algorithms, conventional algorithms currently still achieve considerably higher accuracy.
Meanwhile, most traditional image registration research focuses on feature-based methods, which are better suited than grayscale-based methods to registration between images with complex spatial transformations. Among the many image features, interest-point features are widely researched and applied because they can be located accurately and the matched interest-point coordinates can be used directly to compute the spatial transformation between images.
Although feature-based image registration has been the mainstream of research and achieves good registration results, it has a general problem: because the acquired three-dimensional biomedical image may contain cavities and artifacts, the extracted feature points cannot guarantee that every local region is optimally aligned. When the accuracy requirement is particularly high, it is difficult to further improve registration accuracy in such cases.
Therefore, further improving local registration accuracy under stringent accuracy requirements has become an urgent technical problem.
Disclosure of Invention
The invention aims to solve the defect that the accuracy of a biomedical image registration algorithm in the prior art is difficult to improve, and provides a three-dimensional interactive biomedical image registration method to solve the problem.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a three-dimensional interactive biomedical image registration method, comprising the steps of:
11) acquiring a three-dimensional biomedical image to be registered and a template label image;
12) generating a three-dimensional model of the template label image: extracting contour points of the template label image in different regions according to different label values of the template label image, then carrying out voxel grid filtering, and then carrying out greedy projection triangulation processing to finally obtain a three-dimensional model of the template label image;
13) visualizing the three-dimensional model and the image to be registered: the three-dimensional visualization of the template label image three-dimensional model and the image to be registered is realized by using OpenGL;
14) interactively adjusting the template label image three-dimensional model according to the image to be registered: continuously three-dimensionally and interactively adjusting the three-dimensional model of the template label image according to the displayed slice of the image to be registered, so that the three-dimensional model of the template label image is preliminarily aligned with the image to be registered;
15) obtaining corresponding matching points: sampling a group of corresponding matching points of the three-dimensional model before and after adjustment to form a registration point set;
16) obtaining a registration result: solving a deformation field by using the registration point set, and then obtaining the biomedical image registration result image by deformation-field interpolation.
The method for generating the three-dimensional model of the template label image comprises the following steps:
21) set different pixel values in the template label image as different labels: traverse every pixel of the template label image and record each distinct pixel value ai (i <= n, i ∈ N+); obtain the total number n of label values of the template label image and record all label values as A = {a1, a2, ..., an};
22) for the three-dimensional template label image, divide it into a series of two-dimensional image slices along each of the X, Y and Z axes, and record the X, Y, Z axis maximum values of the image as Xmax, Ymax, Zmax;
23) for each two-dimensional image slice from the previous step, divide it into different regions mi (i <= n, i ∈ N+) according to the label value ai; perform contour extraction on every region mi on the slice to obtain the contour points of each region, and store the coordinate values of every point to obtain a dense point set C = {Cm1, Cm2, ..., Cmn}, where Cmi (i <= n, i ∈ N+) is the point set of the i-th region (a contour-extraction sketch is given after these steps);
24) down-sample the point set C by voxel-grid filtering, i.e. set up a 3D voxel grid whose length, height and width are Xmax/Q, Ymax/Q and Zmax/Q; within every voxel (3D box), all points are approximated by their centroid, so that points that are too close together are filtered out, where Q = (Xmax + Ymax + Zmax) * 6.5f;
25) perform surface rendering on the down-sampled point set by greedy projection triangulation to obtain the three-dimensional model of each region of the template label image, together with the point set corresponding to each three-dimensional model, so that for the i-th region one obtains its three-dimensional model and the point set of that model.
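For illustration, the following is a minimal C++ sketch of the per-region contour extraction in step 23), assuming each slice is available as an 8-bit OpenCV label image; the function, type and variable names are illustrative assumptions and not part of the patent.

```cpp
// Sketch: extract the contour points of one label region on one 2D slice,
// assuming the slice is an 8-bit OpenCV label image; names are illustrative.
#include <opencv2/imgproc.hpp>
#include <vector>

struct Point3f { float x, y, z; };

// Collect the contour points of the region with value `labelValue` on the
// slice at depth z, and append them as 3D points (x, y, z) to `regionPoints`.
void extractRegionContour(const cv::Mat& slice8u, int labelValue, float z,
                          std::vector<Point3f>& regionPoints)
{
    // Binary mask of region m_i: 255 where the pixel equals label value a_i, 0 elsewhere.
    cv::Mat mask = (slice8u == labelValue);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    for (const auto& contour : contours)
        for (const cv::Point& p : contour)
            regionPoints.push_back({static_cast<float>(p.x),
                                    static_cast<float>(p.y), z});
}
```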
Visualizing the three-dimensional model and the image to be registered comprises the following steps:
31) describe the three-dimensional model of the template label image by vertex sequences, display the three-dimensional models of the n regions in random, distinct colors, enable perspective and color blending, and finally display the three-dimensional model of the template label image;
32) extract two-dimensional slices at specified positions along the X, Y and Z axes of the three-dimensional biomedical image to be registered to obtain 2D texture images, map each texture to a quadrilateral at the corresponding position, and specify the texture coordinates of each vertex so that the axis sections of the image are displayed crosswise, obtaining the three-dimensional visualization of the image to be registered.
The interactive adjustment of the surface of the three-dimensional model according to the image to be registered comprises the following steps:
41) acquire the two-dimensional coordinate of the Qt interface currently clicked by the left mouse button and convert it into the two-dimensional coordinate of the OpenGL window; then, assuming a 3D Z coordinate of 0, acquire the viewport, modelview and projection matrices, perform matrix conversion, and convert the window coordinate into a world coordinate;
42) obtain the actual position in space from the OpenGL window size, the rotation matrix, the scaling factor and the horizontal and vertical translation distances;
43) assume two different depth (third-dimension) coordinate values to obtain two three-dimensional coordinate points, establish a spatial straight line, and compute the position where this line intersects the currently selected axis slice; this position is the three-dimensional coordinate of the selected position;
44) after the three-dimensional coordinates of the selected position are obtained, calculate a range around the selected position along the contour surface of the selected region, determined by the parameters height and width, where height is the range selected on the contour surface of the region within the current section and width is the number of sections before and after the current one that are included; the specific steps are as follows:
calculate the point set of the three-dimensional model whose distance from the selected position on the current section is within height; starting from the selected position, continuously search nearby points to obtain the contour-line point set of the three-dimensional model on this slice; through this contour line, search the point sets of the slices whose distance before and after the current slice is within width; all the points found constitute the selected area;
45) according to the distance the left mouse button is dragged, calculate the moving distance of every point in the currently selected area, ensuring that the selected area moves to the corresponding position while remaining smooth; the moving distance G(x, y, z) is computed from M, the moving distance of the mouse, from dx, dy, dz, the differences between the x, y, z values of each point and the mouse click position, and from sigma, the set sigma value; the resulting per-point weight lies between 0 and 1 and is visualized with gradient colors for every point in the currently selected area, and each move keeps the weight at the edge of the area below 0.25 (a sketch of one possible weighting is given after this list);
46) slide the axis slices and, according to the displayed slice of the image to be registered, repeat steps 41) to 43), continuously changing the contour of the three-dimensional model until the adjusted contour of the three-dimensional model is initially aligned with the image to be registered, obtaining the initial curved-surface adjustment result.
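As referenced in step 45), the following is a minimal C++ sketch of one possible smooth weighting for moving the selected area. The Gaussian fall-off is an assumption suggested by the variables M, dx, dy, dz and sigma and by the stated (0, 1) weight range, since the patent gives the formula only as an image; the struct and function names are also illustrative.

```cpp
// Sketch: move every point of the selected region by a smoothly decaying amount.
// The Gaussian weight below is an assumed form, not the patent's exact formula.
#include <cmath>
#include <vector>

struct Pt { float x, y, z; };

void moveSelectedRegion(std::vector<Pt>& selected, const Pt& click,
                        float moveX, float moveY, float moveZ, float sigma)
{
    for (Pt& p : selected) {
        const float dx = p.x - click.x;
        const float dy = p.y - click.y;
        const float dz = p.z - click.z;
        // Weight in (0, 1]: 1 at the click position, falling off with distance.
        const float w = std::exp(-(dx * dx + dy * dy + dz * dz) / (2.0f * sigma * sigma));
        p.x += w * moveX;                  // points near the click follow the mouse,
        p.y += w * moveY;                  // the selection edge (w < 0.25) barely moves,
        p.z += w * moveZ;                  // so the adjusted surface stays smooth
    }
}
```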
The obtaining of the corresponding matching points comprises the following steps:
51) specify, for each region mi (i <= n, i ∈ N+), the number of points to be sampled, with a default value provided;
52) down-sample the three-dimensional models before and after adjustment, region by region, to the specified number of points, giving a pair of corresponding point sets for each region; this group of corresponding matched point sets is the registration point set.
Advantageous effects
Compared with the prior art, the three-dimensional interactive biomedical image registration method displays the three-dimensional biomedical image in three dimensions and extracts feature-point information in real time from mouse feedback, which avoids inaccurate feature points and improves image registration accuracy; it also greatly reduces the difficulty of interactive registration and improves the efficiency of human-computer interaction.
The invention also takes into account artifacts, cavities and excessively large local differences in three-dimensional biomedical images, and improves the registration accuracy of local image details simply and efficiently.
Drawings
FIG. 1 is a sequence diagram of the method of the present invention;
FIG. 2 is a schematic process diagram of the method of the present invention;
FIG. 3a is a section of a selected three-dimensional rat brain MRI template image;
FIG. 3b is a section corresponding to the selected three-dimensional rat brain MRI image to be registered;
FIG. 3c is a graph of the result of registering FIG. 3a to FIG. 3b using the conventional image registration tool NiftyReg;
FIG. 3d is a graph of the result of registering FIG. 3a to FIG. 3b using the conventional image registration tool Elastix;
FIG. 3e is a graph of the result of registering FIG. 3a to FIG. 3b by the method of the present invention;
Detailed Description
So that the above-recited features of the present invention can be clearly and readily understood, the invention, briefly summarized above, is described in more detail below with reference to embodiments, some of which are illustrated in the appended drawings, wherein:
Because the acquired biomedical image may contain continuous cavities, excessive deformation and artifacts, a fully automatic registration algorithm often cannot achieve the optimal registration effect, so it is necessary to improve the registration accuracy of the image through human-computer interaction. Some interactive registration methods align point sets on a slice in only one direction and cannot achieve cross-layer alignment; by freely adjusting the point-set contour in three dimensions, regions can be aligned across layers, better-matched feature points are obtained, and registration accuracy is ultimately improved. As shown in fig. 1 and fig. 2, a three-dimensional interactive biomedical image registration method according to the present invention includes the following steps:
the method comprises the steps of firstly, obtaining a three-dimensional biomedical image to be registered and a template label image.
Secondly, generating a three-dimensional model of the template label image: according to different label values of the template label image, extracting contour points of the template label image in different regions, then carrying out voxel grid filtering, and then carrying out greedy projection triangulation processing to finally obtain a three-dimensional model of the template label image.
Considering that the template label image is composed of a number of label values, each of which delimits a certain region, extracting the contour points of each region by distinguishing the label values allows a three-dimensional model closer to the original image to be fitted. Since extracting the contour point sets on slices of only one fixed axis would leave holes and incomplete parts in the final three-dimensional model, the contour point sets are extracted once along each of the X, Y and Z axes; although this produces redundant points, the overall contour is complete. Because an excessive, redundant number of points would make subsequent rendering and adjustment of the three-dimensional model difficult, the point set of each region is down-sampled with a voxel-grid filter without destroying its geometric structure. Because the density of the down-sampled point set changes uniformly and smoothly, it is triangulated rapidly by greedy projection triangulation, quickly yielding a three-dimensional model similar to the original template label image.
The specific steps of generating the three-dimensional model of the template label image are as follows:
(1) Set different pixel values in the template label image as different labels: traverse every pixel of the template label image and record each distinct pixel value ai (i <= n, i ∈ N+); obtain the total number n of label values of the template label image and record all label values as A = {a1, a2, ..., an};
(2) For the three-dimensional template label image, divide it into a series of two-dimensional image slices along each of the X, Y and Z axes, and record the X, Y, Z axis maximum values of the image as Xmax, Ymax, Zmax;
(3) For each two-dimensional image slice from the previous step, divide it into different regions mi (i <= n, i ∈ N+) according to the label value ai; perform contour extraction on every region mi on the slice to obtain the contour points of each region, and store the coordinate values of every point to obtain a dense point set C = {Cm1, Cm2, ..., Cmn}, where Cmi is the point set of the i-th region;
(4) Down-sample the point set C with voxel-grid filtering: because too many points would make subsequent drawing and adjustment of the three-dimensional model difficult, the point set of each region is down-sampled with a voxel-grid filter without destroying its geometric structure; a 3D voxel grid whose length, height and width are Xmax/Q, Ymax/Q and Zmax/Q is set up, and within every voxel (3D box) all points are approximated by their centroid, where Q = (Xmax + Ymax + Zmax) * 6.5f;
(5) Perform surface rendering on the down-sampled point set by greedy projection triangulation to obtain the three-dimensional model of each region of the template label image: because the density of the down-sampled point set changes uniformly and smoothly, the point set is triangulated rapidly by greedy projection triangulation, quickly yielding a three-dimensional model similar to the original template label image, together with the point set corresponding to each three-dimensional model, so that for the i-th region one obtains its three-dimensional model and the point set of that model.
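A minimal C++ sketch of sub-steps (4) and (5), assuming the contour point sets are held as PCL point clouds; the leaf size follows the Xmax/Q, Ymax/Q, Zmax/Q rule above, while the triangulation parameters, function names and variable names are illustrative assumptions rather than the patent's own settings.

```cpp
// Sketch: down-sample a region's contour point set and surface-render it with PCL.
#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/gp3.h>
#include <pcl/PolygonMesh.h>

pcl::PolygonMesh buildRegionModel(const pcl::PointCloud<pcl::PointXYZ>::Ptr& contour,
                                  float xMax, float yMax, float zMax)
{
    // Q = (Xmax + Ymax + Zmax) * 6.5f, as in the description above.
    const float q = (xMax + yMax + zMax) * 6.5f;

    // Voxel-grid down-sampling: every voxel is replaced by the centroid of its points.
    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(contour);
    grid.setLeafSize(xMax / q, yMax / q, zMax / q);
    grid.filter(*filtered);

    // Estimate normals, which greedy projection triangulation requires.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.setInputCloud(filtered);
    ne.setSearchMethod(tree);
    ne.setKSearch(20);
    ne.compute(*normals);

    pcl::PointCloud<pcl::PointNormal>::Ptr cloudWithNormals(new pcl::PointCloud<pcl::PointNormal>);
    pcl::concatenateFields(*filtered, *normals, *cloudWithNormals);

    // Greedy projection triangulation to obtain the region's surface mesh.
    pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
    tree2->setInputCloud(cloudWithNormals);

    pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
    pcl::PolygonMesh mesh;
    gp3.setSearchRadius(5.0);              // illustrative values, not the patent's
    gp3.setMu(2.5);
    gp3.setMaximumNearestNeighbors(100);
    gp3.setInputCloud(cloudWithNormals);
    gp3.setSearchMethod(tree2);
    gp3.reconstruct(mesh);
    return mesh;
}
```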
Thirdly, visualizing the three-dimensional model and the image to be registered: three-dimensional visualization of the three-dimensional model and the image to be registered is realized with OpenGL. The specific steps are as follows:
(1) Describe the three-dimensional model of the template label image by vertex sequences, assign random, distinct colors to the three-dimensional models of the n regions for display, enable perspective and color blending, and finally display the three-dimensional model of the template label image. To facilitate subsequent adjustment, the transparency of the three-dimensional model can be user-defined so that the contour changes of the preceding and following slices can be better observed while the current slice is being adjusted; likewise, only the three-dimensional model of the region currently being adjusted can be displayed, so that only a region of interest or a region with an excessive difference is adjusted; the edge weight is kept below 0.25 (visible as the blue part of the color), which helps the region move smoothly.
(2) Extract two-dimensional slices at specified positions along the X, Y and Z axes of the three-dimensional biomedical image to be registered to obtain 2D texture images, map each texture to a quadrilateral at the corresponding position, and specify the texture coordinates of each vertex so that the axis sections of the image are displayed crosswise; the axis section can be switched with the mouse wheel or a scroll bar.
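A minimal C++ sketch of displaying one axial slice as a texture-mapped quadrilateral with legacy OpenGL, as described in (2); the 8-bit grayscale volume layout and all function and variable names are illustrative assumptions.

```cpp
// Sketch: upload the z-th slice of an 8-bit volume as a 2D texture and draw it
// as a quad at that slice position, using legacy (fixed-function) OpenGL.
#include <GL/gl.h>
#include <cstdint>
#include <vector>

GLuint uploadSliceTexture(const uint8_t* volume, int dimX, int dimY, int dimZ, int z)
{
    // Copy the z-th slice out of the volume (x fastest, then y, then z).
    std::vector<uint8_t> slice(static_cast<size_t>(dimX) * dimY);
    for (int y = 0; y < dimY; ++y)
        for (int x = 0; x < dimX; ++x)
            slice[y * dimX + x] = volume[(static_cast<size_t>(z) * dimY + y) * dimX + x];

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, dimX, dimY, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, slice.data());
    return tex;
}

void drawAxialSlice(GLuint tex, float sizeX, float sizeY, float zPos)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);                     // quad at the slice position, texture mapped 1:1
    glTexCoord2f(0.f, 0.f); glVertex3f(0.f,   0.f,   zPos);
    glTexCoord2f(1.f, 0.f); glVertex3f(sizeX, 0.f,   zPos);
    glTexCoord2f(1.f, 1.f); glVertex3f(sizeX, sizeY, zPos);
    glTexCoord2f(0.f, 1.f); glVertex3f(0.f,   sizeY, zPos);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```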
Step four, interactively adjusting the surface of the three-dimensional model according to the image to be registered: the three-dimensional model is adjusted through three-dimensional interaction so that every contour surface of the three-dimensional model comes close to the image to be registered.
Adjusting the three-dimensional model generated in the previous step presupposes that the coordinates of the selected position are accurately picked in real time: a spatial straight line is obtained through OpenGL coordinate-system conversion, and its intersection with the currently selected axis slice is then computed rapidly. Once the intersection point is obtained, and considering that the image to be registered and the template image may differ greatly in appearance, the three-dimensional model must be adjusted smoothly; here the moving intensity is constrained by set parameters, and the constraint is made visible through gradient-color visualization. The invention can freely adjust the point-set contour in three dimensions, and the moving process is real-time and efficient.
The specific steps of interactively adjusting the surface of the three-dimensional model according to the image to be registered are as follows:
(1) Pick the three-dimensional coordinates of the selected position on the slice. First, the two-dimensional coordinate of the Qt interface currently clicked by the left mouse button is acquired and converted into the two-dimensional coordinate of the OpenGL window (the Qt screen coordinate system is flipped vertically with respect to OpenGL). Then, assuming a 3D window Z coordinate of 0, the viewport, modelview and projection matrices are acquired, and matrix conversion is performed with the gluUnProject function to convert the window coordinate into a world coordinate. Next, the actual position in space is obtained from the OpenGL window size, the rotation matrix, the scaling factor, the horizontal and vertical translation distances, and so on. Finally, two different depth values are assumed to obtain two three-dimensional coordinate points, a spatial straight line is established, and the position where this line intersects the currently selected axis slice is computed; this position is the three-dimensional coordinate of the selected position (a code sketch of this picking procedure follows these steps).
(2) After the three-dimensional coordinates of the selected position are obtained, a range around the selected position is calculated along the contour surface of the selected region, determined by the parameters height and width: height is the range selected on the contour surface of the region within the current section, and width is the number of sections before and after the current one that are included. Specifically, the point set of the three-dimensional model whose distance from the selected position on the current section is within height is calculated; then, starting from the selected position, nearby points are searched continuously to obtain the contour-line point set of the three-dimensional model on this slice; through this contour line, the point sets of the slices whose distance before and after the current slice is within width are searched; all the points found constitute the selected area.
(3) According to the distance the left mouse button is dragged, the moving distance of every point in the currently selected area is calculated, ensuring that the selected area moves to the corresponding position while remaining smooth. The moving distance is computed from M, the current moving distance of the mouse, from dx, dy, dz, the differences between the x, y, z values of each point and the mouse-selected position, and from sigma, the set sigma value. The resulting per-point weight lies between 0 and 1 and is visualized with gradient colors for every point in the currently selected area; each move keeps the weight at the edge below 0.25 (visible as the blue part of the color), which helps the region move smoothly;
(4) Slide the axis slices and, according to the displayed slice of the image to be registered, repeat steps (1) to (3), continuously changing the contour of the three-dimensional model until the adjusted contour is initially aligned with the image to be registered, obtaining the initial curved-surface adjustment result.
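A minimal C++ sketch of the picking procedure in step (1): converting the Qt mouse coordinate, un-projecting it at two depths with gluUnProject, and intersecting the resulting line with the currently selected axial slice. The function signature and the restriction to a Z-axis slice plane are illustrative assumptions.

```cpp
// Sketch: recover the 3D coordinate under the mouse on the current axial (Z) slice,
// assuming legacy OpenGL matrices and a Qt widget; names are illustrative.
#include <GL/gl.h>
#include <GL/glu.h>

bool pickOnAxialSlice(int mouseX, int mouseY, int widgetHeight,
                      double sliceZ, double out[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // Qt's y axis runs downward, OpenGL's window y axis runs upward.
    const double winX = mouseX;
    const double winY = widgetHeight - mouseY;

    // Un-project at two depths to obtain two points on the picking ray.
    GLdouble nx, ny, nz, fx, fy, fz;
    if (gluUnProject(winX, winY, 0.0, model, proj, view, &nx, &ny, &nz) != GL_TRUE ||
        gluUnProject(winX, winY, 1.0, model, proj, view, &fx, &fy, &fz) != GL_TRUE)
        return false;

    // Intersect the ray with the plane z = sliceZ of the currently selected slice.
    const double dz = fz - nz;
    if (dz == 0.0)
        return false;                      // ray parallel to the slice plane
    const double t = (sliceZ - nz) / dz;
    out[0] = nx + t * (fx - nx);
    out[1] = ny + t * (fy - ny);
    out[2] = sliceZ;
    return true;
}
```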
Fifthly, obtaining corresponding matching points: a group of corresponding matching points is sampled from the three-dimensional model before and after adjustment to form the registration point set. The specific steps are as follows:
(1) Specify, for each region mi (i <= n, i ∈ N+), the number of points required; a default value is provided.
(2) Down-sample the three-dimensional models before and after adjustment, region by region, to the specified number of points, giving a pair of corresponding point sets for each region. This group of corresponding matched point sets is the point set ultimately used for registration.
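A minimal C++ sketch of step five, assuming the adjusted model keeps the same vertex indexing as the original so that sampling the same indices from both yields corresponding point pairs; the per-region count n is a plain parameter here, since the patent's default value is shown only as a formula image, and all names are illustrative.

```cpp
// Sketch: sample up to n corresponding point pairs from one region's model
// vertices before and after interactive adjustment (same index = same vertex).
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct Point3 { float x, y, z; };   // illustrative point type

std::vector<std::pair<Point3, Point3>>
sampleCorrespondences(const std::vector<Point3>& before,
                      const std::vector<Point3>& after,
                      std::size_t n)
{
    std::vector<std::pair<Point3, Point3>> pairs;
    if (before.empty() || before.size() != after.size() || n == 0)
        return pairs;
    const std::size_t step = std::max<std::size_t>(1, before.size() / n);
    for (std::size_t i = 0; i < before.size() && pairs.size() < n; i += step)
        pairs.emplace_back(before[i], after[i]);   // i-th vertex before vs. after
    return pairs;
}
```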
Sixthly, obtaining the registration result: a deformation field is solved from the matched group of points with a traditional method, and the registration result image is then obtained by traditional deformation-field interpolation.
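The patent only states that a traditional method is used here; as one illustrative possibility, the following C++ sketch interpolates the control-point displacements with normalized Gaussian weights and warps the moving volume by backward nearest-neighbor mapping. The choice of interpolant, the sigma parameter and all names are assumptions, not the patent's prescribed method.

```cpp
// Sketch: dense deformation field from matched point pairs (normalized Gaussian
// scattered-data interpolation), then backward warping of the moving volume.
#include <cmath>
#include <cstdint>
#include <vector>

struct Pt3 { float x, y, z; };

// Displacement at position (x, y, z), interpolated from control pairs
// (fixedPts in the output space, movingPts the corresponding moving-image points).
static Pt3 displacementAt(float x, float y, float z,
                          const std::vector<Pt3>& fixedPts,
                          const std::vector<Pt3>& movingPts, float sigma)
{
    float wSum = 0.f, ux = 0.f, uy = 0.f, uz = 0.f;
    for (std::size_t i = 0; i < fixedPts.size(); ++i) {
        const float dx = x - fixedPts[i].x, dy = y - fixedPts[i].y, dz = z - fixedPts[i].z;
        const float w = std::exp(-(dx * dx + dy * dy + dz * dz) / (2.f * sigma * sigma));
        wSum += w;
        ux += w * (movingPts[i].x - fixedPts[i].x);
        uy += w * (movingPts[i].y - fixedPts[i].y);
        uz += w * (movingPts[i].z - fixedPts[i].z);
    }
    if (wSum <= 0.f) return {0.f, 0.f, 0.f};
    return {ux / wSum, uy / wSum, uz / wSum};
}

// Backward warping: each output voxel samples the moving volume at p + u(p).
static std::vector<uint8_t> warpVolume(const std::vector<uint8_t>& moving,
                                       int dimX, int dimY, int dimZ,
                                       const std::vector<Pt3>& fixedPts,
                                       const std::vector<Pt3>& movingPts, float sigma)
{
    std::vector<uint8_t> out(moving.size(), 0);
    for (int z = 0; z < dimZ; ++z)
        for (int y = 0; y < dimY; ++y)
            for (int x = 0; x < dimX; ++x) {
                const Pt3 u = displacementAt((float)x, (float)y, (float)z,
                                             fixedPts, movingPts, sigma);
                const int sx = (int)std::lround(x + u.x);
                const int sy = (int)std::lround(y + u.y);
                const int sz = (int)std::lround(z + u.z);
                if (sx >= 0 && sx < dimX && sy >= 0 && sy < dimY && sz >= 0 && sz < dimZ)
                    out[((size_t)z * dimY + y) * dimX + x] =
                        moving[((size_t)sz * dimY + sy) * dimX + sx];
            }
    return out;
}
```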
As shown in FIG. 3a and FIG. 3b, both are rat brain MRI images, the former serving as the template image and the latter as the image to be registered, the goal being to register FIG. 3b to FIG. 3a. FIG. 3c shows the registration result obtained with the conventional image registration tool NiftyReg, FIG. 3d the result obtained with the conventional tool Elastix, and FIG. 3e the result obtained with the method of the present invention. By comparison, the registration effect of the proposed method is better: at the positions marked by the boxes, its result is more similar to the structure and brightness distribution of the template image of FIG. 3a.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (5)

1. A three-dimensional interactive biomedical image registration method is characterized by comprising the following steps:
11) acquiring a three-dimensional biomedical image to be registered and a template label image;
12) generating a three-dimensional model of the template label image: extracting contour points of the template label image in different regions according to different label values of the template label image, then carrying out voxel grid filtering, and then carrying out greedy projection triangulation processing to finally obtain a three-dimensional model of the template label image;
13) visualizing the three-dimensional model and the image to be registered: the three-dimensional visualization of the template label image three-dimensional model and the image to be registered is realized by using OpenGL;
14) interactively adjusting the template label image three-dimensional model according to the image to be registered: continuously three-dimensionally and interactively adjusting the three-dimensional model of the template label image according to the displayed slice of the image to be registered, so that the three-dimensional model of the template label image is preliminarily aligned with the image to be registered;
15) obtaining corresponding matching points: sampling a group of corresponding matching points of the three-dimensional model before and after adjustment to form a registration point set;
16) obtaining a registration result: solving a deformation field by using the registration point set, and then obtaining the biomedical image registration result image by deformation-field interpolation.
2. The three-dimensional interactive biomedical image registration method according to claim 1, wherein said generating a three-dimensional model of the template tag image comprises the steps of:
21) set different pixel values in the template label image as different labels: traverse every pixel of the template label image and record each distinct pixel value ai (i <= n, i ∈ N+); obtain the total number n of label values of the template label image and record all label values as A = {a1, a2, ..., an};
22) for the three-dimensional template label image, divide it into a series of two-dimensional image slices along each of the X, Y and Z axes, and record the X, Y, Z axis maximum values of the image as Xmax, Ymax, Zmax;
23) for each two-dimensional image slice from the previous step, divide it into different regions mi (i <= n, i ∈ N+) according to the label value ai; perform contour extraction on every region mi on the slice to obtain the contour points of each region, and store the coordinate values of every point to obtain a dense point set C = {Cm1, Cm2, ..., Cmn}, where Cmi is the point set of the i-th region;
24) down-sample the point set C by voxel-grid filtering, i.e. set up a 3D voxel grid whose length, height and width are Xmax/Q, Ymax/Q and Zmax/Q; within every voxel (3D box), all points are approximated by their centroid, so that points that are too close together are filtered out, where Q = (Xmax + Ymax + Zmax) * 6.5f;
25) perform surface rendering on the down-sampled point set by greedy projection triangulation to obtain the three-dimensional model of each region of the template label image, together with the point set corresponding to each three-dimensional model, so that for the i-th region one obtains its three-dimensional model and the point set of that model.
3. The three-dimensional interactive biomedical image registration method according to claim 1, wherein said visualizing the three-dimensional model and the image to be registered comprises the steps of:
31) describe the three-dimensional model of the template label image by vertex sequences, display the three-dimensional models of the n regions in random, distinct colors, enable perspective and color blending, and finally display the three-dimensional model of the template label image;
32) extract two-dimensional slices at specified positions along the X, Y and Z axes of the three-dimensional biomedical image to be registered to obtain 2D texture images, map each texture to a quadrilateral at the corresponding position, and specify the texture coordinates of each vertex so that the axis sections of the image are displayed crosswise, obtaining the three-dimensional visualization of the image to be registered.
4. The three-dimensional interactive biomedical image registration method according to claim 1, wherein the interactive adjustment of the three-dimensional model surface according to the image to be registered comprises the following steps:
41) acquire the two-dimensional coordinate of the Qt interface currently clicked by the left mouse button and convert it into the two-dimensional coordinate of the OpenGL window; then, assuming a 3D Z coordinate of 0, acquire the viewport, modelview and projection matrices, perform matrix conversion, and convert the window coordinate into a world coordinate;
42) obtain the actual position in space from the OpenGL window size, the rotation matrix, the scaling factor and the horizontal and vertical translation distances;
43) assume two different depth (third-dimension) coordinate values to obtain two three-dimensional coordinate points, establish a spatial straight line, and compute the position where this line intersects the currently selected axis slice; this position is the three-dimensional coordinate of the selected position;
44) after the three-dimensional coordinates of the selected position are obtained, calculate a range around the selected position along the contour surface of the selected region, determined by the parameters height and width, where height is the range selected on the contour surface of the region within the current section and width is the number of sections before and after the current one that are included; the specific steps are as follows:
calculate the point set of the three-dimensional model whose distance from the selected position on the current section is within height; starting from the selected position, continuously search nearby points to obtain the contour-line point set of the three-dimensional model on this slice; through this contour line, search the point sets of the slices whose distance before and after the current slice is within width; all the points found constitute the selected area;
45) according to the distance the left mouse button is dragged, calculate the moving distance of every point in the currently selected area, ensuring that the selected area moves to the corresponding position while remaining smooth; the moving distance G(x, y, z) is computed from M, the moving distance of the mouse, from dx, dy, dz, the differences between the x, y, z values of each point and the mouse click position, and from sigma, the set sigma value; the resulting per-point weight lies between 0 and 1 and is visualized with gradient colors for every point in the currently selected area, and each move keeps the weight at the edge below 0.25;
46) slide the axis slices and, according to the displayed slice of the image to be registered, repeat steps 41) to 43), continuously changing the contour of the three-dimensional model until the adjusted contour of the three-dimensional model is initially aligned with the image to be registered, obtaining the initial curved-surface adjustment result.
5. The three-dimensional interactive biomedical image registration method according to claim 1, characterized in that the obtaining of the corresponding matching points comprises the following steps:
51) specify, for each region mi (i <= n, i ∈ N+), the number of points to be sampled, with a default value provided;
52) down-sample the three-dimensional models before and after adjustment, region by region, to the specified number of points, giving a pair of corresponding point sets for each region; this group of corresponding matched point sets is the registration point set.
CN202111479226.0A 2021-12-06 2021-12-06 Three-dimensional interactive biomedical image registration method Active CN114140504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111479226.0A CN114140504B (en) 2021-12-06 2021-12-06 Three-dimensional interactive biomedical image registration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111479226.0A CN114140504B (en) 2021-12-06 2021-12-06 Three-dimensional interactive biomedical image registration method

Publications (2)

Publication Number Publication Date
CN114140504A true CN114140504A (en) 2022-03-04
CN114140504B CN114140504B (en) 2024-03-01

Family

ID=80384421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111479226.0A Active CN114140504B (en) 2021-12-06 2021-12-06 Three-dimensional interactive biomedical image registration method

Country Status (1)

Country Link
CN (1) CN114140504B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977162A (en) * 2023-09-25 2023-10-31 福建自贸试验区厦门片区Manteia数据科技有限公司 Image registration method and device, storage medium and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182699A (en) * 2017-12-28 2018-06-19 北京天睿空间科技股份有限公司 Three-dimensional registration method based on two dimensional image local deformation
US20210090272A1 (en) * 2019-09-24 2021-03-25 Dentsply Sirona Inc. Method, system and computer readable storage media for registering intraoral measurements

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182699A (en) * 2017-12-28 2018-06-19 北京天睿空间科技股份有限公司 Three-dimensional registration method based on two dimensional image local deformation
US20210090272A1 (en) * 2019-09-24 2021-03-25 Dentsply Sirona Inc. Method, system and computer readable storage media for registering intraoral measurements

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977162A (en) * 2023-09-25 2023-10-31 福建自贸试验区厦门片区Manteia数据科技有限公司 Image registration method and device, storage medium and electronic equipment
CN116977162B (en) * 2023-09-25 2024-01-19 福建自贸试验区厦门片区Manteia数据科技有限公司 Image registration method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN114140504B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
US10665013B2 (en) Method for single-image-based fully automatic three-dimensional hair modeling
CN105844706B (en) A kind of full-automatic three-dimensional scalp electroacupuncture method based on single image
CN110349247B (en) Indoor scene CAD three-dimensional reconstruction method based on semantic understanding
CN104376594B (en) Three-dimensional face modeling method and device
EP2206090B1 (en) Method and device for illustrating a virtual object in a real environment
Li et al. An overlapping-free leaf segmentation method for plant point clouds
CN102509357B (en) Pencil sketch simulating and drawing system based on brush stroke
Grabner et al. Location field descriptors: Single image 3d model retrieval in the wild
CN109658444B (en) Regular three-dimensional color point cloud registration method based on multi-modal features
CN107945285B (en) Three-dimensional model map changing and deforming method
CN106651900A (en) Three-dimensional modeling method of elevated in-situ strawberry based on contour segmentation
CN104376596A (en) Method for modeling and registering three-dimensional scene structures on basis of single image
CN104008547B (en) A kind of visualization sliced image of human body serializing dividing method based on skeleton angle point
WO2009016511A2 (en) Shape preserving mappings to a surface
CN115641322A (en) Robot grabbing method and system based on 6D pose estimation
CN114140504B (en) Three-dimensional interactive biomedical image registration method
CN112396655A (en) Point cloud data-based ship target 6D pose estimation method
Yang et al. Classification of 3D terracotta warriors fragments based on geospatial and texture information
CN117475170A (en) FPP-based high-precision point cloud registration method guided by local-global structure
CN117218192A (en) Weak texture object pose estimation method based on deep learning and synthetic data
Bhakar et al. A review on classifications of tracking systems in augmented reality
CN109345570A (en) A kind of multichannel three-dimensional colour point clouds method for registering based on geometry
Meyer et al. PEGASUS: Physically Enhanced Gaussian Splatting Simulation System for 6DOF Object Pose Dataset Generation
Xi et al. Consistent parameterization and statistical analysis of human head scans
Zimmermann et al. Sketching contours

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant