CN116681719A - Digital knee joint cartilage segmentation method - Google Patents

Digital knee joint cartilage segmentation method

Info

Publication number
CN116681719A
CN116681719A
Authority
CN
China
Prior art keywords
image
MRI
cartilage
mask
MRI image
Prior art date
Legal status
Pending
Application number
CN202310719680.1A
Other languages
Chinese (zh)
Inventor
Li Yingxi (李映锡)
Current Assignee
Inner Mongolia Ziyin Technology Co ltd
Original Assignee
Inner Mongolia Ziyin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Inner Mongolia Ziyin Technology Co ltd
Priority to CN202310719680.1A
Publication of CN116681719A

Classifications

    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/174 Segmentation; Edge detection involving the use of two or more images
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20224 Image subtraction
    • G06T2207/30008 Bone
    • G06T2210/41 Medical
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The application discloses a digital knee joint cartilage segmentation method comprising the steps of importing medical images, selecting feature points, calculating a transformation matrix M, registering the medical images, calculating cartilage data, and reconstructing the cartilage in three dimensions. The method combines the respective recognition advantages of CT for cortical bone (hard bone) and of MRI for cartilage, uses both medical images as data sources, avoids the loss of segmentation precision caused by relying on a single imaging principle, and thereby improves segmentation accuracy. The method registers and processes the two-dimensional images first and reconstructs the cartilage in three dimensions directly, rather than reconstructing the CT and MRI images in three dimensions and then performing subtraction, which greatly reduces the amount of data processing and the processing time. The method also markedly reduces manual, subjective operation and improves the repeatability and reliability of the segmentation result.

Description

Digital knee joint cartilage segmentation method
Technical Field
The application relates to the field of digital medicine, and in particular to a digital knee joint cartilage segmentation method.
Background
Three-dimensional model reconstruction is an indispensable link in digital orthopaedic medicine. To better determine a surgical plan, the medical practitioner typically reconstructs a three-dimensional bone model of the relevant region of the patient from medical image data. Preoperative planning is performed with this three-dimensional bone model, and the surgical guide and the surgical implant are designed on its basis. Three-dimensional model reconstruction is therefore the most basic and critical step in the overall treatment plan.
In the treatment of knee joint disease, it is often necessary to treat the knee joint cartilage. The three-dimensional model reconstruction therefore inevitably requires segmenting the cartilage tissue of the knee joint region. In the prior art, the user (typically an engineer with medical training) relies on a single medical image, such as CT or MRI, and exploits the different responses of cortical bone (the harder tissue) and cartilage to ionizing radiation or nuclear magnetic resonance to segment the cartilage from the cortical bone and reconstruct a three-dimensional model of the cartilage alone.
A typical prior art digital knee cartilage segmentation method generally comprises the steps of:
importing a medical image (MRI or CT);
adjusting the window width and window level;
selecting a suitable threshold;
selecting a seed point;
performing region growing around the seed point;
performing three-dimensional reconstruction;
checking the quality of the three-dimensional model and repairing it.
Owing to their imaging principles, both CT and MRI have drawbacks when imaging the mixed cortical bone/cartilage region. CT has low sensitivity to soft tissue and therefore low recognition accuracy for cartilage. MRI, in contrast, recognizes water-rich soft tissue very well, but recognizes cortical bone, which has a high mineral content and a low water content, poorly. Accurate segmentation is therefore not possible with either medical image alone. In addition, because the existing segmentation method involves many steps that must be selected and set manually, the segmentation result varies with the subjective operation of different operators, and the reliability of the result is significantly reduced.
Therefore, those skilled in the art are working to develop a digital knee joint cartilage segmentation method that combines the imaging characteristics of CT and MRI to solve the technical problems of the prior art.
Disclosure of Invention
To achieve the above purpose, the application provides a digital knee joint cartilage segmentation method, which specifically comprises the following steps:
Step 1, importing medical images: importing a CT image and an MRI image respectively;
Step 2, selecting feature points: selecting feature points on the CT image to obtain CT feature points, and selecting feature points on the MRI image to obtain MRI feature points;
Step 3, calculating a transformation matrix M: calculating a transformation matrix M from the MRI image to the CT image according to the coordinates of the CT feature points and the MRI feature points;
Step 4, registering the medical images: re-slicing the CT image on the basis of the MRI image according to the transformation matrix M to obtain a registered CT image corresponding to the MRI image;
Step 5, calculating cartilage data: generating an MRI image mask based on the MRI image, generating a registered CT image mask based on the registered CT image, and performing a two-dimensional Boolean subtraction operation on the MRI image mask and the registered CT image mask to obtain cartilage model data;
Step 6, three-dimensional reconstruction of the cartilage: performing three-dimensional reconstruction based on the cartilage model data to obtain a three-dimensional cartilage model.
Further, in step 2, 4 CT feature points are selected on the CT image, and 4 MRI feature points at corresponding positions are selected on the MRI image.
Further, the 4 corresponding positions are: the posterior edge of the femoral condyle, the posterior edge of the tibial plateau, the anterior edge of the femoral trochlea and the anterior edge of the tibial plateau.
Further, step 4 specifically includes the following steps:
Step 4.1, for the MRI image and the CT image, obtaining the position and orientation of the registered CT image based on the transformation matrix M;
Step 4.2, setting the length, width and thickness of the registered CT image based on the length, width and thickness of the MRI image to obtain the corresponding registered CT image;
Step 4.3, performing the above operations on each MRI image to obtain the registered CT image of each MRI image.
Further, the generation logic of the CT image mask and the MRI image mask is that the mask value of a bone region is defined as 1 and the mask value of a non-bone region is defined as 0.
Further, the Boolean subtraction operation specifically comprises:
for every coordinate whose mask value is 1 in the MRI image, examining the mask value of the registered CT image at the same coordinate:
if the mask value of the registered CT image is 1, the resulting mask value is 0;
if the mask value of the registered CT image is 0, the resulting mask value is 1.
Further, in step 4.2, the length and width of the MRI image are 512 pixels.
Further, in step 4.2, the thickness of the MRI image is 2 pixels.
Compared with the prior art, the technical solution of the application has at least the following technical effects:
1. The technical solution combines the respective recognition advantages of CT for cortical bone (hard bone) and of MRI for cartilage, uses both medical images as data sources, avoids the loss of segmentation precision caused by relying on a single imaging principle, and improves the segmentation accuracy.
2. The technical solution registers and processes the two-dimensional images and reconstructs the cartilage in three dimensions directly, rather than reconstructing the CT and MRI images in three dimensions separately and then performing subtraction, which greatly reduces the amount of data processing and shortens the processing time.
3. Compared with the prior art, manual operation is required only in the feature point selection step, so manual subjective operation is markedly reduced and the repeatability and reliability of the segmentation result are improved.
The conception, specific structure and technical effects of the present application are further described below with reference to the accompanying drawings, so that the objects, features and effects of the present application can be fully understood.
Drawings
FIG. 1 is a method flow diagram of one embodiment of the present application;
FIG. 2 is a schematic representation of an MRI image employed in one embodiment of the present application;
FIG. 3 is a schematic view of a CT image employed in one embodiment of the present application;
FIG. 4 is a schematic representation of a registered CT image versus an MRI image in an embodiment of the present application;
FIG. 5 is a schematic diagram of the effect of step 5 in one embodiment of the application;
FIG. 6 is a schematic diagram of a segmentation effect of an embodiment of the present application;
FIG. 7 is a schematic representation of a three-dimensional model of cartilage in accordance with one embodiment of the present application;
FIG. 8 is a schematic representation of a three-dimensional model of cartilage in accordance with one embodiment of the present application.
Detailed Description
The preferred embodiments of the present application are described below with reference to the accompanying drawings so that the technical content is clearer and easier to understand. The application may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the present application, the terms "upper", "lower", "left", "right", "inner" and "outer" describe relative positions in the drawings; they are used only for structural description and are not limiting.
Examples
MRI and CT are medical images widely used in clinical practice. Because of their different imaging principles, MRI and CT respond differently to the different tissues of the human body, which results in different imaging clarity and accuracy. Figs. 2 and 3 show the MRI image and the CT image of the same region of the knee joint used in this embodiment, where fig. 2 is the MRI image and fig. 3 is the CT image. As can readily be seen, the MRI image is more sensitive to water-rich body tissue (e.g. muscle, blood vessels, nerves) and less sensitive to bone with little water content (especially cortical bone), so that the surfaces of the femur and the tibia (typically one of the contours of the cartilage) are not sharp in fig. 2. Unlike MRI, the CT image responds strongly to the hard tissues of the human body (e.g. bone) and only weakly to soft tissue. Thus, as shown in fig. 3, the bone outline in the CT image is quite clear, but the contour interface of the soft tissue is blurred. Consequently, using either the MRI image or the CT image alone as the basis for cartilage segmentation produces large errors in either the bone contour (MRI alone) or the soft tissue contour (CT alone).
In this embodiment, both MRI images and CT images are used as the basis for cartilage segmentation, so that the advantages of the two modalities can be combined and the accuracy significantly improved. In principle, it is only necessary to obtain the higher-precision contour interface from each image, namely the bone interface in the CT image and the soft tissue interface in the MRI image, and to perform a Boolean subtraction operation between them to obtain a high cartilage segmentation precision.
There is, however, a difficulty. Although the MRI images and the CT images cover approximately the same region, each set is acquired in its own patient coordinate system. Even MRI and CT images corresponding to the same region therefore cannot be guaranteed to coincide in position, orientation, length, width and height in actual physical space (even if they appear consistent in their respective coordinate systems), so the precondition for the Boolean subtraction operation is not satisfied. The MRI images and the CT images must therefore be registered first. In this embodiment, the CT images are preferably registered, that is, for each MRI image a CT image of the corresponding region with the same position, orientation, length, width and height is found in the same coordinate system.
As shown in fig. 1, the specific method of this embodiment includes the following steps:
Step 1, importing medical images: the CT image and the MRI image are imported respectively, as shown in figs. 2 and 3.
Step 2, selecting feature points: feature points are selected on the CT image to obtain CT feature points, and feature points are selected on the MRI image to obtain MRI feature points.
In the present embodiment, 4 feature points are selected on each of the MRI image shown in fig. 2 and the CT image shown in fig. 3. The positions of the feature points are: the posterior edge of the femoral condyle, the posterior edge of the tibial plateau, the anterior edge of the femoral trochlea and the anterior edge of the tibial plateau, as indicated by the white dots in figs. 2 and 3.
Step 3, calculating a transformation matrix M: based on the coordinates of the CT feature points and the MRI feature points, a transformation matrix M from the MRI image to the CT image is calculated.
Step 4, registering medical images: and (3) re-slicing the CT image based on the MRI image according to the transformation matrix M to obtain a registration CT image corresponding to the MRI image.
In this embodiment, step 4 specifically includes:
and 4.1, for the MRI image and the CT image, obtaining the position and the direction of the registered CT image based on the transformation matrix M.
And 4.2, setting the length, the width and the thickness of the registration CT image based on the length, the width and the thickness of the MRI image to obtain a corresponding registration CT image. In this embodiment, the length and width of the MRI image are set to 512 pixels, and the thickness is set to 2 pixels.
And 4.3, carrying out the operation on each MRI image to obtain a registered CT image of each MRI image.
An example of a registered CT image obtained using the method of the present embodiment is shown in fig. 4. In fig. 4, the right is an MRI image and the left is a registered CT image of the corresponding position.
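For reference, the re-slicing of step 4 can be implemented with a general-purpose resampling routine. The following minimal sketch assumes SimpleITK (the patent names no library) and uses the MRI series as the reference grid that supplies position, orientation, size and spacing; the file names are placeholders.

```python
# Illustrative sketch (assumes SimpleITK; not prescribed by the patent).
import SimpleITK as sitk

def reslice_ct_to_mri(ct_image, mri_image, M):
    """Resample the CT volume onto the grid of the MRI image.

    M is the 4x4 matrix mapping MRI physical coordinates to CT physical
    coordinates, which is the output-to-input direction the resampler expects.
    """
    transform = sitk.AffineTransform(3)
    transform.SetMatrix(M[:3, :3].flatten().tolist())
    transform.SetTranslation(M[:3, 3].tolist())
    # mri_image is the reference image: the result inherits the MRI origin,
    # direction, spacing and size, i.e. the registered CT matches the MRI slices.
    return sitk.Resample(ct_image, mri_image, transform,
                         sitk.sitkLinear, -1024.0, ct_image.GetPixelID())

# Hypothetical usage:
# ct_image = sitk.ReadImage("ct_volume.nii.gz")
# mri_image = sitk.ReadImage("mri_volume.nii.gz")
# registered_ct = reslice_ct_to_mri(ct_image, mri_image, M)
```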
Step 5, calculating cartilage data: an MRI image mask is generated based on the MRI image, a registered CT image mask is generated based on the registered CT image, and a two-dimensional Boolean subtraction operation is performed on the MRI image mask and the registered CT image mask to obtain cartilage model data.
In this embodiment, step 5 specifically includes:
In the MRI image, the mask value of the bone region is set to 1 and the mask value of the non-bone region is set to 0. In the registered CT image, the mask value of the bone region is likewise set to 1 and the mask value of the non-bone region to 0. This operation in effect determines the two contour interfaces of the cartilage region. As shown by the white lines in fig. 5, the left side is the registered CT image, where the outline of the bone region defines the boundary of the cartilage region adjacent to the bone; the right side is the MRI image, where the outer boundary of the cartilage region is determined.
For every coordinate whose mask value is 1 in the MRI image, the mask value of the registered CT image at the same coordinate is examined:
if the mask value of the registered CT image is 1, the resulting mask value is 0;
if the mask value of the registered CT image is 0, the resulting mask value is 1.
As shown in fig. 6, after step 5, the overall outline of the cartilage region (the region surrounded by the white solid line) is obtained.
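A minimal sketch of this per-slice mask logic follows, assuming Python with numpy; the intensity thresholds are placeholders standing in for whatever bone segmentation is actually used on each modality.

```python
# Illustrative sketch (assumes numpy; thresholds are hypothetical, not from the patent).
import numpy as np

def cartilage_slice_mask(mri_slice, registered_ct_slice, mri_thresh, ct_thresh):
    """2D Boolean subtraction of the two bone masks on one slice pair."""
    mri_mask = mri_slice > mri_thresh              # 1 inside the MRI bone contour
    ct_mask = registered_ct_slice > ct_thresh      # 1 inside the CT cortical bone contour
    # Where the MRI mask is 1, the result is 1 only where the registered CT mask is 0.
    return mri_mask & ~ct_mask

# Stacking the per-slice results gives the cartilage model data:
# cartilage_volume = np.stack([cartilage_slice_mask(m, c, t_mri, t_ct)
#                              for m, c in zip(mri_slices, registered_ct_slices)])
```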
Step 6, three-dimensional reconstruction of cartilage: and carrying out three-dimensional reconstruction based on the cartilage model data to obtain a cartilage three-dimensional model.
Figs. 7 and 8 show the three-dimensional cartilage model reconstructed by the method of this embodiment. In the whole cartilage segmentation process, manual operation occurs only in the feature point selection step; all other steps are generated automatically by the computer according to the algorithm, so compared with the prior art human judgement and operation are markedly reduced and the segmentation result is more reliable and repeatable. In addition, before the cartilage model data are obtained, no three-dimensional reconstruction of the bone or soft tissue regions of the knee joint is performed; the data calculations and operations are carried out directly on the two-dimensional MRI and CT images, so that compared with the prior art the amount of data computation is significantly reduced and the efficiency is improved.
The foregoing describes in detail preferred embodiments of the present application. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the application without requiring creative effort by one of ordinary skill in the art. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by the person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (8)

1. A digital knee joint cartilage segmentation method, characterized by comprising the following steps:
Step 1, importing medical images: importing a CT image and an MRI image respectively;
Step 2, selecting feature points: selecting feature points on the CT image to obtain CT feature points, and selecting feature points on the MRI image to obtain MRI feature points;
Step 3, calculating a transformation matrix M: calculating a transformation matrix M from the MRI image to the CT image according to the coordinates of the CT feature points and the MRI feature points;
Step 4, registering the medical images: re-slicing the CT image on the basis of the MRI image according to the transformation matrix M to obtain a registered CT image corresponding to the MRI image;
Step 5, calculating cartilage data: generating an MRI image mask based on the MRI image, generating a registered CT image mask based on the registered CT image, and performing a two-dimensional Boolean subtraction operation on the MRI image mask and the registered CT image mask to obtain cartilage model data;
Step 6, three-dimensional reconstruction of the cartilage: performing three-dimensional reconstruction based on the cartilage model data to obtain a three-dimensional cartilage model.
2. The digital knee joint cartilage segmentation method of claim 1, wherein in step 2, 4 CT feature points are selected on the CT image and 4 MRI feature points at corresponding positions are selected on the MRI image.
3. The digital knee joint cartilage segmentation method of claim 2, wherein the 4 corresponding positions are: the posterior edge of the femoral condyle, the posterior edge of the tibial plateau, the anterior edge of the femoral trochlea and the anterior edge of the tibial plateau.
4. The digital knee joint cartilage segmentation method of claim 3, wherein step 4 comprises the following steps:
Step 4.1, for the MRI image and the CT image, obtaining the position and orientation of the registered CT image based on the transformation matrix M;
Step 4.2, setting the length, width and thickness of the registered CT image based on the length, width and thickness of the MRI image to obtain the corresponding registered CT image;
Step 4.3, performing the above operations on each MRI image to obtain the registered CT image of each MRI image.
5. The digital knee joint cartilage segmentation method of claim 4, wherein in step 5 the generation logic of the CT image mask and the MRI image mask is to define the mask value of a bone region as 1 and the mask value of a non-bone region as 0.
6. The digital knee joint cartilage segmentation method of claim 5, wherein in step 5 the Boolean subtraction operation comprises:
for every coordinate whose mask value is 1 in the MRI image, examining the mask value of the registered CT image at the same coordinate:
if the mask value of the registered CT image is 1, the resulting mask value is 0;
if the mask value of the registered CT image is 0, the resulting mask value is 1.
7. The digital knee joint cartilage segmentation method of claim 6, wherein in step 4.2 the length and width of the MRI image are 512 pixels.
8. The digital knee joint cartilage segmentation method of claim 7, wherein in step 4.2 the thickness of the MRI image is 2 pixels.
CN202310719680.1A, filed 2023-06-17, priority date 2023-06-17: Digital knee joint cartilage segmentation method, published as CN116681719A (en), status Pending.

Priority Applications (1)

Application number: CN202310719680.1A | Priority date: 2023-06-17 | Filing date: 2023-06-17 | Title: Digital knee joint cartilage segmentation method

Applications Claiming Priority (1)

Application number: CN202310719680.1A | Priority date: 2023-06-17 | Filing date: 2023-06-17 | Title: Digital knee joint cartilage segmentation method

Publications (1)

Publication number: CN116681719A (en) | Publication date: 2023-09-01

Family

ID=87783554

Family Applications (1)

Application number: CN202310719680.1A (Pending, CN116681719A (en)) | Priority date: 2023-06-17 | Filing date: 2023-06-17

Country Status (1)

Country: CN | Link: CN116681719A (en)


Legal Events

Code: PB01 | Description: Publication
Code: SE01 | Description: Entry into force of request for substantive examination