CN111292306A - Knee joint CT and MR image fusion method and device - Google Patents

Knee joint CT and MR image fusion method and device

Info

Publication number
CN111292306A
Authority
CN
China
Prior art keywords
image
contour
knee joint
tibia
point
Prior art date
Legal status
Pending
Application number
CN202010079580.3A
Other languages
Chinese (zh)
Inventor
王君臣
李维全
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202010079580.3A
Publication of CN111292306A

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0012: Image analysis; Inspection of images, e.g. flaw detection; Biomedical image inspection
    • G06T 7/136: Image analysis; Segmentation; Edge detection involving thresholding
    • G06T 7/33: Image analysis; Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/10081: Image acquisition modality; Tomographic images; Computed x-ray tomography [CT]
    • G06T 2207/10088: Image acquisition modality; Tomographic images; Magnetic resonance imaging [MRI]
    • G06T 2207/20221: Special algorithmic details; Image combination; Image fusion; Image merging
    • G06T 2207/30008: Subject of image; Biomedical image processing; Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a knee joint CT and MR image fusion method and device. The method comprises the following steps: extracting contour point clouds of the femur and the tibia from the CT image and the MR image, respectively; registering the femoral and tibial contour point clouds to obtain a femur and tibia registration result; and fusing the knee joint CT and MR images of the thigh and the shank respectively according to the femur and tibia registration result to obtain a complete fused knee joint CT and MR image. The method realizes the fusion of knee joint CT and MR images, provides the physician with a more comprehensive view of the knee joint anatomy, can play a positive role in preoperative diagnosis and surgical path planning for knee replacement surgery, and is simple and easy to implement.

Description

Knee joint CT and MR image fusion method and device
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular to a method and an apparatus for fusing CT (Computed Tomography) and MR (Magnetic Resonance) images of the knee joint.
Background
Medical image fusion refers to the process in which images of the same target, acquired by multiple imaging sensors, undergo image processing so that the useful information in each is extracted and finally synthesized into a single image for observation or further processing. From the viewpoint of information theory, the fused image can outperform any of the sub-images that compose it: the integrated whole carries more information than the sum of its parts, i.e. the fused result should contain more useful information than any single input source (1 + 1 > 2). This is the essence of image information fusion.
Medical image fusion is a step-by-step process. Different fusion methods have their own specific operations, but regardless of the technique, image fusion is generally completed in three main steps: image preprocessing, image registration and creation of the fused image.
Image preprocessing refers to operations such as noise removal, contrast enhancement and region-of-interest segmentation applied to the acquired image data. It unifies data formats, image sizes and resolutions and, where the data allow, re-slices the images so that they are comparable in spatial resolution and spatial orientation. On this basis, an appropriate mathematical model can be established according to the target characteristics or the intended application.
Image registration refers to finding one spatial transformation, or a series of them, that brings one medical image into spatial correspondence with another. Correspondence means that the same anatomical point has the same spatial position in both images; after registration, all anatomical points, or at least all points of diagnostic and surgical interest, in the two images are matched. Image registration is the prerequisite and key step of image fusion, and its accuracy directly determines the quality of the fusion result.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present invention is to provide a knee joint CT and MR image fusion method that realizes the fusion of knee joint CT and MR images, provides the physician with a more comprehensive view of the knee joint anatomy, can play an active role in preoperative diagnosis and surgical path planning for knee replacement surgery, and is simple and easy to implement.
The invention also aims to provide a knee joint CT and MR image fusion device.
In order to achieve the above object, an embodiment of the invention provides a knee joint CT and MR image fusion method, which includes the following steps: extracting contour point clouds of the femur and the tibia from the CT image and the MR image, respectively; registering the femoral and tibial contour point clouds to obtain a femur and tibia registration result; and fusing the knee joint CT and MR images of the thigh and the shank respectively according to the femur and tibia registration result to obtain a complete fused knee joint CT and MR image.
According to the knee joint CT and MR image fusion method provided by the embodiment of the invention, the femoral and tibial contours are extracted from the CT and MR images, point cloud registration is performed between the corresponding extracted contours, and a fused CT and MR image is generated. The method thus provides the physician with a more comprehensive view of the knee joint anatomy, can play a positive role in preoperative diagnosis and surgical path planning for knee replacement surgery, and is simple and easy to implement.
In addition, the knee joint CT and MR image fusion method according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, extracting the femoral and tibial contour point clouds from the CT and MR images includes: performing threshold segmentation on the CT image to obtain contour information of the femur and the tibia; removing noise points from the model and separating out a bone contour model meeting a preset condition; and repairing defects in the segmented contour model using a region growing method to obtain the femoral and tibial contour models.
Further, in an embodiment of the present invention, extracting the femoral and tibial contour point clouds from the CT and MR images further includes: calculating the gray-level histogram of the MR image and estimating a threshold range; determining the center of gravity (COG) and the initial radius of the icosahedron-derived surface model; and iteratively updating the model shape according to the evolution formula until convergence so as to segment the bone contour.
Further, in an embodiment of the present invention, registering the femoral and tibial contour point clouds includes: extracting the three-dimensional normals of points in the source point cloud, calculating the fast point feature histogram of each point from the normals, and performing the same processing on the target point cloud; searching the target point cloud for points whose fast point feature histograms satisfy a preset condition with respect to points selected from the source point cloud to obtain matched point pairs, and calculating the least-squares transformation between the matched point pairs as a registration result; and using the registration result as the initial pose estimate of the ICP algorithm, which is iterated to convergence to obtain the final registration transformation result.
In order to achieve the above object, another embodiment of the present invention provides a knee joint CT and MR image fusion apparatus, including: an extraction module, configured to extract contour point clouds of the femur and the tibia from the CT image and the MR image, respectively; a registration module, configured to register the femoral and tibial contour point clouds to obtain a femur and tibia registration result; and a fusion module, configured to fuse the knee joint CT and MR images of the thigh and the shank respectively according to the femur and tibia registration result to obtain a complete fused knee joint CT and MR image.
According to the knee joint CT and MR image fusion device of this embodiment, the femoral and tibial contours are extracted from the CT and MR images, point cloud registration is performed between the corresponding extracted contours, and a fused CT and MR image is generated. The device thus provides the physician with a more comprehensive view of the knee joint anatomy, can play a positive role in preoperative diagnosis and surgical path planning for knee replacement surgery, and is simple and easy to implement.
In addition, the knee joint CT and MR image fusion device according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the extraction module is further configured to perform threshold segmentation on the CT image to obtain contour information of the femur and the tibia, remove noise points from the model, separate out a bone contour model meeting a preset condition, and repair defects in the segmented contour model using a region growing method to obtain the femoral and tibial contour models.
Further, in an embodiment of the present invention, the extraction module is further configured to calculate the gray-level histogram of the MR image, estimate a threshold range, determine the center of gravity (COG) and the initial radius of the icosahedron-derived surface model, and iteratively update the model shape according to the evolution formula until convergence so as to segment the bone contour.
Further, in an embodiment of the present invention, the registration module is further configured to extract the three-dimensional normals of points in the source point cloud, calculate the fast point feature histogram of each point from the normals, perform the same processing on the target point cloud, search the target point cloud for points whose fast point feature histograms satisfy a preset condition with respect to points selected from the source point cloud to obtain matched point pairs, calculate the least-squares transformation between the matched point pairs as a registration result, use the registration result as the initial pose estimate of the ICP algorithm, and iterate the ICP algorithm to convergence to obtain the final registration transformation result.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart of a knee joint CT and MR image fusion method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of thresholding a CT image according to an embodiment of the present invention;
FIG. 3 is a flow diagram of the surface evolution update process according to an embodiment of the invention;
FIG. 4 is a diagram illustrating the parameters used in the surface evolution update process according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a point cloud extraction result of the contour of the femur and tibia in the CT and MR images according to the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a knee joint CT and MR image fusion apparatus according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The method and the device for fusing the CT and MR images of the knee joint according to the embodiment of the present invention will be described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a knee joint CT and MR image fusion method according to an embodiment of the present invention.
As shown in fig. 1, the knee joint CT and MR image fusion method includes the following steps:
In step S101, contour point clouds of the femur and the tibia are extracted from the CT image and the MR image, respectively.
It can be understood that the embodiment of the present invention first extracts the femoral and tibial contour point clouds from the CT and MR images.
In one embodiment of the present invention, extracting the femoral and tibial contour point clouds from the CT and MR images comprises: performing threshold segmentation on the CT image to obtain contour information of the femur and the tibia; removing noise points from the model and separating out a bone contour model meeting a preset condition; and repairing defects in the segmented contour model using a region growing method to obtain the femoral and tibial contour models.
It can be understood that the knee joint image fusion method first needs to extract the contour models of the femur and the tibia from both images. In the CT image, the contrast between bone and soft tissue is high, so a threshold segmentation method is adopted for contour extraction. The specific steps are as follows:
(1) As shown in FIG. 2, the CT image is subjected to threshold segmentation to obtain the approximate contours of the femur and the tibia;
(2) noise points are removed from the model and the correct bone contour model is separated out;
(3) defects in the segmented contour model are repaired using a region growing method to obtain complete, smooth contour models of the femur and the tibia. A minimal sketch of these three steps is given below.
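For illustration only, the following sketch assumes the CT volume is available as a NumPy array in Hounsfield units; the 250 HU bone threshold, the number of retained components and the morphological repair are illustrative assumptions rather than parameters of this embodiment (a region-growing repair could be substituted for the closing and hole filling used here).

```python
import numpy as np
from scipy import ndimage

def extract_bone_mask_ct(ct_hu: np.ndarray, bone_threshold: float = 250.0,
                         keep_components: int = 2) -> np.ndarray:
    """Sketch of steps (1)-(3): threshold, keep the largest components, repair defects.

    ct_hu:           3D CT volume in Hounsfield units (assumption).
    bone_threshold:  HU value separating bone from soft tissue (illustrative).
    keep_components: number of bone components to keep (femur and tibia).
    """
    # (1) Threshold segmentation: bone voxels are much brighter than soft tissue.
    mask = ct_hu > bone_threshold

    # (2) Remove noise points: label connected components and keep the largest ones.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[::-1][:keep_components] + 1   # labels of the largest blobs
    mask = np.isin(labels, keep)

    # (3) Repair defects: a simple morphological closing plus hole filling stands in
    #     for the region-growing repair described in the text.
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    return mask
```

The boundary of this mask (for example, boundary voxels or a marching-cubes surface) can then be sampled to obtain the CT contour point cloud.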
Further, in an embodiment of the present invention, extracting the femoral and tibial contour point clouds from the CT and MR images further includes: calculating the gray-level histogram of the MR image and estimating a threshold range; determining the center of gravity (COG) and the initial radius of the icosahedron-derived surface model; and iteratively updating the model shape according to the evolution formula until convergence, thereby segmenting the bone contour.
It is understood that in the MR image the contrast between bone and soft tissue is much less obvious, so the contours cannot be extracted with a simple threshold segmentation method. Instead, BET (Brain Extraction Tool) is introduced into the knee joint image segmentation process for contour extraction. The specific steps are as follows:
(1) calculating the gray-level histogram and roughly estimating a threshold range;
(2) determining the center of gravity (COG) and the initial radius r of the icosahedron-derived surface model;
(3) iteratively updating the model shape according to the evolution formula until convergence, yielding the segmented bone contour. A sketch of the initialization in steps (1) and (2) follows.
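As a rough illustration of the initialization in steps (1) and (2), the sketch below assumes the MR volume is a NumPy array with a known isotropic voxel size; the 2%/98% percentile thresholds, the 10% threshold t, and the halved sphere-equivalent radius for the initial surface follow the standard BET recipe and are assumptions, not values specified here.

```python
import numpy as np

def bet_initialization(mr: np.ndarray, voxel_size_mm: float = 1.0):
    """Steps (1)-(2): robust intensity thresholds, COG and initial sphere radius."""
    # (1) Gray-level histogram statistics: robust 2% / 98% thresholds and a rough
    #     foreground threshold t at 10% of the robust intensity range.
    t2, t98 = np.percentile(mr, [2.0, 98.0])
    t = 0.9 * t2 + 0.1 * t98

    # (2) Intensity-weighted center of gravity of the above-threshold voxels, and an
    #     initial radius from the volume they occupy (half of the sphere-equivalent radius).
    fg = mr > t
    coords = np.argwhere(fg).astype(float)
    weights = np.clip(mr[fg], t2, t98)
    cog = (coords * weights[:, None]).sum(axis=0) / weights.sum()
    volume_mm3 = fg.sum() * voxel_size_mm ** 3
    radius = 0.5 * (3.0 * volume_mm3 / (4.0 * np.pi)) ** (1.0 / 3.0)
    return t2, t98, t, cog * voxel_size_mm, radius
```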
The surface evolution update process is shown in FIG. 3 and specifically includes:
Step 1: for each vertex x of the icosahedron-derived surface model, compute f1, f2 and f3.
Step 2: determine whether I(x) is larger than the threshold; if not, go directly to Step 3; if so, let f3 = -f3. Here I(x) is the intensity of the pixel adjacent to the current vertex along the normal direction of the contour.
Step 3: compute the update u = 0.5·f1 + f2·sn + 0.05·l·f3·n, where n is the unit normal of the contour surface at the vertex, and update the vertex as x = x + u, i.e. v_{k+1} = v_k + u_k, where u is the contour evolution step, v is the current vertex coordinate and k is the current iteration. The parameters in Steps 1-3 are further explained with reference to FIG. 4:
f1 is st in FIG. 4, with sn = (n·s)·n and st = s - sn, where n is the unit normal vector of the evolving contour surface, s is the vector pointing from the current vertex to the average position of its neighboring vertices, sn is the component of s along the normal of the evolving contour surface, and st is the corresponding tangential component;
f2 is computed as
f2 = (1 + tanh(F·(1/r - E)))/2, with E = (1/α + 1/β)/2 and F = 6/(1/α - 1/β),
where r = l²/(2·|sn|) is the estimate of the local curvature radius at the current vertex; α and β are empirical smoothness parameters chosen according to the actual situation, typically 1 mm and 8 mm respectively; and l is the average distance between the current vertex and its neighboring vertices;
f3 is computed as
f3 = 2·(Imin - tl)/(Imax - t2), with
tl = (Imax - t2)·bt + t2,
Imin = max(t2, min(tm, I(0), I(1), …, I(d))),
Imax = min(tm, max(t, I(0), I(1), …, I(d))),
t = 0.9·t2 + 0.1·t98,
where t2 and t98 are the 2% and 98% intensities of the MR image's pixel intensity histogram; tm is the median intensity; Imin and Imax are estimates of the minimum and maximum intensities of the pixels adjacent to the current vertex along the contour normal direction (within d pixels of the current vertex); and bt is an empirical parameter, typically taken as 0.5.
Step 4: determine whether the contour evolution step u has converged, i.e. whether the step size of each update has fallen below a preset lower limit. If not, return to Step 1; if so, the segmentation is complete and the final segmentation result is obtained. It should be noted that the evolution process must ensure that the contour does not self-intersect; otherwise the initial point must be repositioned and the contour evolution repeated. A sketch of a single vertex update (Steps 1-3) is given below.
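The sketch below illustrates one vertex update (Steps 1-3), assuming the vertex position, its neighbor positions, the outward unit normal and the intensity profile I(0), …, I(d) sampled along the normal are already available; the helper name, the α = 1 mm, β = 8 mm and bt = 0.5 defaults, and the use of I(0) as I(x) are illustrative assumptions.

```python
import numpy as np

def vertex_update(x, neighbors, n_hat, intensity_profile,
                  t, t2, t98, tm, alpha=1.0, beta=8.0, bt=0.5):
    """One evolution step for a single vertex x (Steps 1-3 of the text).

    x                 : (3,) current vertex position
    neighbors         : (k, 3) positions of the neighboring vertices
    n_hat             : (3,) unit normal of the surface at x
    intensity_profile : intensities I(0), ..., I(d) sampled along the normal
    t, t2, t98, tm    : thresholds from the gray-level histogram (see initialization)
    alpha, beta       : empirical smoothness radii in mm (illustrative defaults)
    bt                : empirical intensity parameter, typically 0.5
    """
    # s: vector from the vertex to the mean position of its neighbors,
    # split into its normal (sn) and tangential (st) components.
    s = neighbors.mean(axis=0) - x
    sn = np.dot(s, n_hat) * n_hat
    st = s - sn
    l = np.linalg.norm(neighbors - x, axis=1).mean()   # mean neighbor distance

    # f1: tangential smoothing component.
    f1 = st

    # f2: curvature-driven smoothing weight.
    r = l ** 2 / (2.0 * np.linalg.norm(sn) + 1e-12)    # local curvature radius estimate
    E = 0.5 * (1.0 / alpha + 1.0 / beta)
    F = 6.0 / (1.0 / alpha - 1.0 / beta)
    f2 = 0.5 * (1.0 + np.tanh(F * (1.0 / r - E)))

    # f3: intensity-driven component pushing the surface toward the bone boundary.
    Imin = max(t2, min(tm, intensity_profile.min()))
    Imax = min(tm, max(t, intensity_profile.max()))
    tl = (Imax - t2) * bt + t2
    f3 = 2.0 * (Imin - tl) / (Imax - t2 + 1e-12)

    # Step 2: flip the sign of f3 when I(x) exceeds the threshold.
    if intensity_profile[0] > t:
        f3 = -f3

    # Step 3: combined update u; return the new position and the step magnitude
    # so the caller can test the convergence criterion of Step 4.
    u = 0.5 * f1 + f2 * sn + 0.05 * l * f3 * n_hat
    return x + u, np.linalg.norm(u)
```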
Further, the final segmentation result is shown in fig. 5, where the left side of fig. 5 is a CT image and the right side is an MR image.
In step S102, the contour point clouds of the femur and the tibia are registered to obtain a femur and tibia registration result.
In one embodiment of the present invention, registering the femoral and tibial contour point clouds includes: extracting the three-dimensional normals of points in the source point cloud, computing the fast point feature histogram of each point from the normals, and performing the same processing on the target point cloud; searching the target point cloud for points whose fast point feature histograms satisfy a preset condition with respect to points selected from the source point cloud to obtain matched point pairs, and computing the least-squares transformation between the matched point pairs as a registration result; and using this result as the initial pose estimate of the ICP algorithm, which is iterated to convergence to obtain the final registration transformation result.
It can be understood that the most classical rigid registration algorithm is ICP (Iterative Closest Point). However, ICP relies on a good initial pose estimate, i.e. it must be given a good input, for example two point clouds whose initial poses are already close; otherwise ICP easily falls into a local optimum and produces very poor results. Therefore, instead of using ICP directly, variants of ICP are often used, or ICP is combined with other algorithms as the situation requires. The embodiment of the invention uses SAC-IA (Sample Consensus Initial Alignment, a random-sample-consensus registration algorithm) for coarse registration and provides its result to ICP as a better initial pose estimate.
Specifically, since the femur and the tibia are rigid objects, a rigid registration algorithm is adopted. The embodiment of the invention uses a "coarse registration + fine registration" scheme, combining SAC-IA with ICP to complete the registration.
The SAC-IA algorithm first estimates the three-dimensional normals of the points in the source point cloud, computes the FPFH (Fast Point Feature Histogram) feature of each point from these normals, and then applies the same processing to the target point cloud. For points sampled from the source point cloud, points with similar FPFH features are searched for in the target point cloud to obtain matched point pairs, and the least-squares transformation between these pairs is computed as the coarse registration result. This result is then used as the initial pose estimate of the ICP algorithm, which iterates until convergence to obtain the final registration transformation, i.e. one rigid registration for the femur and one for the tibia.
In this way, the embodiment of the invention obtains the mapping between each point of the CT image and each point of the MR image through coarse registration followed by fine registration, and the image fusion can then be carried out on the basis of this result. A sketch of the registration pipeline is given below.
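As one possible implementation of this "coarse + fine" pipeline, the sketch below uses the Open3D library: FPFH features, a RANSAC-based feature-matching stage standing in for SAC-IA, then point-to-point ICP. The voxel size, search radii and distance thresholds are illustrative assumptions rather than values prescribed by this embodiment.

```python
import open3d as o3d

def register_contours(source_pcd, target_pcd, voxel=2.0):
    """Coarse (FPFH + RANSAC feature matching) then fine (ICP) rigid registration.

    source_pcd, target_pcd: open3d.geometry.PointCloud contour point clouds
    voxel:                  downsampling voxel size in mm (illustrative)
    """
    reg = o3d.pipelines.registration

    def preprocess(pcd):
        # Downsample, estimate normals, then compute FPFH features from the normals.
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = reg.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source_pcd)
    tgt_down, tgt_fpfh = preprocess(target_pcd)

    # Coarse registration: feature matching + RANSAC (Open3D's counterpart of SAC-IA).
    coarse = reg.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True,
        voxel * 1.5,
        reg.TransformationEstimationPointToPoint(False),
        3,
        [reg.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         reg.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        reg.RANSACConvergenceCriteria(100000, 0.999))

    # Fine registration: ICP initialized with the coarse result.
    fine = reg.registration_icp(
        src_down, tgt_down, voxel * 0.8, coarse.transformation,
        reg.TransformationEstimationPointToPoint())
    return fine.transformation   # 4x4 rigid transform mapping source onto target
```

Running this once for the femoral contours and once for the tibial contours yields the two rigid transformations mentioned above.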
In step S103, knee joint CT and MR images of the thigh and the calf are fused according to the femur and tibia registration result, respectively, so as to obtain a complete knee joint CT and MR fusion image.
It can be understood that, according to the femur and tibia registration results obtained in step S102, knee joint CT and MR image fusion is performed separately for the thigh and the calf, so as to obtain a complete fused knee joint CT and MR image. A sketch of one way to realize this step, by resampling the MR volume into the CT frame with the rigid transforms and overlaying the two modalities, is given below.
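The sketch below shows one way such a fusion step could be realized with SimpleITK, assuming the registration yields a 4x4 rigid transform from MR physical space to CT physical space: the MR volume is resampled onto the CT grid and the two volumes are alpha-blended. The blending rule and the use of a single transform per bone region are illustrative assumptions.

```python
import numpy as np
import SimpleITK as sitk

def fuse_ct_mr(ct: sitk.Image, mr: sitk.Image, rigid_4x4: np.ndarray,
               alpha: float = 0.5) -> sitk.Image:
    """Resample MR into the CT frame with the registration result and blend.

    rigid_4x4: 4x4 homogeneous rigid transform mapping MR physical points
               into CT physical space (output of the registration step).
    alpha:     blending weight for the overlay (illustrative).
    """
    # Build a SimpleITK affine transform from the 4x4 matrix. Resampling needs the
    # transform that maps CT (output) points back to MR (input) points, hence the inverse.
    inv = np.linalg.inv(rigid_4x4)
    transform = sitk.AffineTransform(3)
    transform.SetMatrix(inv[:3, :3].ravel().tolist())
    transform.SetTranslation(inv[:3, 3].tolist())

    # Resample the MR volume onto the CT grid.
    mr_in_ct = sitk.Resample(mr, ct, transform, sitk.sitkLinear, 0.0, sitk.sitkFloat32)

    # Normalize both volumes to [0, 1] and blend them for visualization.
    ct_f = sitk.RescaleIntensity(sitk.Cast(ct, sitk.sitkFloat32), 0.0, 1.0)
    mr_f = sitk.RescaleIntensity(mr_in_ct, 0.0, 1.0)
    fused = ct_f * (1.0 - alpha) + mr_f * alpha
    return fused
```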
In summary, according to the knee joint CT and MR image fusion method provided by the embodiment of the invention, the femoral and tibial contours are extracted from the CT and MR images, point cloud registration is performed between the corresponding extracted contours, and a fused CT and MR image is generated, thereby providing the physician with a more comprehensive view of the knee joint anatomy. The method can play an active role in preoperative diagnosis and surgical path planning for knee replacement surgery, and is simple and easy to implement.
Next, a knee joint CT and MR image fusion apparatus according to an embodiment of the present invention will be described with reference to the drawings.
FIG. 6 is a schematic structural diagram of a knee joint CT and MR image fusion device according to an embodiment of the present invention.
As shown in fig. 6, the knee joint CT and MR image fusion apparatus 10 includes: an extraction module 100, a registration module 200 and a fusion module 300.
The extraction module 100 is configured to extract contour point clouds of the femur and the tibia from the CT image and the MR image, respectively; the registration module 200 is configured to register the femoral and tibial contour point clouds to obtain a femur and tibia registration result; the fusion module 300 is configured to fuse the knee joint CT and MR images of the thigh and the calf respectively according to the femur and tibia registration result, so as to obtain a complete fused knee joint CT and MR image. The device 10 of the embodiment of the invention thus realizes the fusion of knee joint CT and MR images, provides the physician with a more comprehensive view of the knee joint anatomy, can play an active role in preoperative diagnosis and surgical path planning for knee replacement surgery, and is simple and easy to implement.
Further, in an embodiment of the present invention, the extraction module 100 is further configured to perform threshold segmentation on the CT image to obtain contour information of the femur and the tibia, remove noise points from the model, separate out a bone contour model meeting a preset condition, and repair defects in the segmented contour model using a region growing method to obtain the femoral and tibial contour models.
Further, in an embodiment of the present invention, the extraction module 100 is further configured to calculate the gray-level histogram of the MR image, estimate a threshold range, determine the center of gravity (COG) and the initial radius of the icosahedron-derived surface model, and iteratively update the model shape according to the evolution formula until convergence so as to segment the bone contour.
Further, in an embodiment of the present invention, the registration module 200 is further configured to extract the three-dimensional normals of points in the source point cloud, calculate the fast point feature histogram of each point from the normals, perform the same processing on the target point cloud, search the target point cloud for points whose fast point feature histograms satisfy a preset condition with respect to points selected from the source point cloud to obtain matched point pairs, calculate the least-squares transformation between the matched point pairs as a registration result, use the registration result as the initial pose estimate of the ICP algorithm, and iterate the ICP algorithm to convergence to obtain the final registration transformation result.
It should be noted that the above explanation of the embodiment of the method for fusing CT and MR images of a knee joint is also applicable to the device for fusing CT and MR images of a knee joint of this embodiment, and is not repeated here.
According to the knee joint CT and MR image fusion device provided by the embodiment of the invention, the femoral and tibial contours are extracted from the CT and MR images, point cloud registration is performed between the corresponding extracted contours, and a fused CT and MR image is generated, thereby providing the physician with a more comprehensive view of the knee joint anatomy, playing a positive role in preoperative diagnosis and surgical path planning for knee replacement surgery, and being simple and easy to implement.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediate medium. Likewise, a first feature being "on," "over," or "above" a second feature may mean that the first feature is directly or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under," "below," or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (8)

1. A knee joint CT and MR image fusion method is characterized by comprising the following steps:
extracting contour point clouds of the femur and the tibia from the CT image and the MR image, respectively;
registering the femoral and tibial contour point clouds to obtain a femur and tibia registration result; and
fusing the knee joint CT and MR images of the thigh and the shank respectively according to the femur and tibia registration result to obtain a complete fused knee joint CT and MR image.
2. The method of claim 1, wherein extracting a femoral and tibial contour point cloud in the CT and MR images comprises:
performing threshold segmentation on the CT image to obtain contour information of the femur and the tibia;
removing noise points from the model, and separating out a bone contour model meeting a preset condition;
and repairing defects in the segmented contour model by using a region growing method to obtain femoral and tibial contour models.
3. The method of claim 2, wherein extracting a femoral and tibial contour point cloud in the CT and MR images further comprises:
calculating the gray-level histogram of the MR image, and estimating a threshold range;
determining the center of gravity (COG) and the initial radius of the icosahedron-derived surface model;
and iteratively updating the model shape according to the evolution formula until convergence so as to segment the bone contour.
4. The method of claim 1, wherein registering the contour point clouds of the femur and the tibia comprises:
extracting three-dimensional normal information of points in the source point cloud, calculating fast point feature histograms of the points by using the normal information, and performing the same processing on the target point cloud;
searching the target point cloud for points whose fast point feature histograms satisfy a preset condition with respect to points selected from the source point cloud to obtain matched point pairs, and calculating the least-squares transformation between the matched point pairs as a registration result;
and taking the registration result as the initial pose estimate of the ICP algorithm, and iterating the ICP algorithm to convergence to obtain a final registration transformation result.
5. A knee joint CT and MR image fusion device is characterized by comprising:
an extraction module, configured to extract contour point clouds of the femur and the tibia from the CT image and the MR image, respectively;
a registration module, configured to register the femoral and tibial contour point clouds to obtain a femur and tibia registration result; and
a fusion module, configured to fuse the knee joint CT and MR images of the thigh and the shank respectively according to the femur and tibia registration result to obtain a complete fused knee joint CT and MR image.
6. The device of claim 5, wherein the extraction module is further configured to perform threshold segmentation on the CT image to obtain contour information of the femur and the tibia, remove noise points in the model, separate a bone contour model satisfying a preset condition, and repair defects in the segmented contour model by using a region growing method to obtain a contour model of the femur and the tibia.
7. The apparatus of claim 6, wherein the extraction module is further configured to calculate the gray-level histogram of the MR image, estimate a threshold range, determine the center of gravity (COG) and the initial radius of the icosahedron-derived surface model, and iteratively update the model shape according to the evolution formula until convergence so as to segment the bone contour.
8. The device according to claim 5, wherein the registration module is further configured to: extract the three-dimensional normals of points in the source point cloud, calculate the fast point feature histogram of each point from the normals, and perform the same processing on the target point cloud; search the target point cloud for points whose fast point feature histograms satisfy a preset condition with respect to points selected from the source point cloud to obtain matched point pairs, and calculate the least-squares transformation between the matched point pairs as a registration result; and use the registration result as the initial pose estimate of the ICP algorithm and iterate the ICP algorithm to convergence to obtain a final registration transformation result.
CN202010079580.3A 2020-02-04 2020-02-04 Knee joint CT and MR image fusion method and device Pending CN111292306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010079580.3A CN111292306A (en) 2020-02-04 2020-02-04 Knee joint CT and MR image fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010079580.3A CN111292306A (en) 2020-02-04 2020-02-04 Knee joint CT and MR image fusion method and device

Publications (1)

Publication Number Publication Date
CN111292306A 2020-06-16

Family

ID=71025527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010079580.3A Pending CN111292306A (en) 2020-02-04 2020-02-04 Knee joint CT and MR image fusion method and device

Country Status (1)

Country Link
CN (1) CN111292306A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102940530A (en) * 2012-11-16 2013-02-27 昆明医科大学第一附属医院 Method for virtually building anterior cruciate ligament on femur and tibia tunnels
CN105139442A (en) * 2015-07-23 2015-12-09 昆明医科大学第一附属医院 Method for establishing human knee joint three-dimensional simulation model in combination with CT (Computed Tomography) and MRI (Magnetic Resonance Imaging)
CN110353806A (en) * 2019-06-18 2019-10-22 北京航空航天大学 Augmented reality navigation methods and systems for the operation of minimally invasive total knee replacement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨金柱 et al.: "Automatic extraction algorithm of brain tissue from MRI based on improved BET", Journal of Northeastern University (Natural Science) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763297A (en) * 2021-06-30 2021-12-07 安徽省立医院(中国科学技术大学附属第一医院) Acromioclavicular joint CT image processing method
CN113936100A (en) * 2021-10-12 2022-01-14 大连医科大学附属第二医院 Extraction and reconstruction method for human knee joint cruciate ligament insertion points
CN117670951A (en) * 2023-11-14 2024-03-08 北京长木谷医疗科技股份有限公司 Knee joint image registration method and device based on multi-mode cross attention mechanism
CN117670951B (en) * 2023-11-14 2024-06-25 北京长木谷医疗科技股份有限公司 Knee joint image registration method and device based on multi-mode cross attention mechanism


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200616)