WO2022042459A1 - Preoperative planning method, system and medium for total knee arthroplasty based on deep learning - Google Patents

Preoperative planning method, system and medium for total knee arthroplasty based on deep learning

Info

Publication number
WO2022042459A1
Authority
WO
WIPO (PCT)
Prior art keywords
key
femoral
prosthesis
tibial
femur
Prior art date
Application number
PCT/CN2021/113946
Other languages
English (en)
French (fr)
Inventor
刘星宇
张逸凌
Original Assignee
张逸凌
北京长木谷医疗科技有限公司
Priority date
Filing date
Publication date
Application filed by 张逸凌 and 北京长木谷医疗科技有限公司
Publication of WO2022042459A1 publication Critical patent/WO2022042459A1/zh

Classifications

    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
            • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
              • A61B2034/101 Computer-aided simulation of surgical operations
              • A61B2034/107 Visualisation of planned trajectories or target regions
              • A61B2034/108 Computer aided selection or customisation of medical implants or cutting guides
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
              • G06N3/08 Learning methods
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
          • G06T7/00 Image analysis
            • G06T7/10 Segmentation; Edge detection
              • G06T7/11 Region-based segmentation
            • G06T7/60 Analysis of geometric attributes
              • G06T7/68 Analysis of geometric attributes of symmetry

Definitions

  • the present disclosure relates to the field of medical technology, and in particular, to a preoperative planning method, system and medium for total knee arthroplasty based on deep learning.
  • The knee joint is the main weight-bearing joint of the body and, under long-term load and heavy use, is one of the joints most prone to injury. Together with the continuing aging of the population, these factors make the incidence of knee joint disease high. At present, most segmentation methods for bone and joint CT images, both domestically and abroad, require manual positioning or manual segmentation in each CT image, which is time-consuming, labor-intensive, and inefficient.
  • TKA: Total Knee Arthroplasty.
  • The purpose of the present disclosure is to provide a preoperative planning method, system and medium for total knee arthroplasty based on deep learning, so as to realize automatic segmentation of the bones and automatic identification and measurement of key axes, key anatomical sites and key anatomical parameters in total knee arthroplasty.
  • a first aspect of the present disclosure provides a deep learning-based preoperative planning method for total knee arthroplasty, the method is based on medical image data of a patient's lower extremities, and the method includes:
  • a three-dimensional image of the skeletal structure is obtained through the medical image data processing, and key axes, key anatomical sites and key anatomical parameters are identified and marked;
  • the skeletal structure includes femur, tibia, fibula and the patella;
  • the key axes include the femoral anatomical axis, the femoral mechanical axis, the tibial anatomical axis and the tibial mechanical axis;
  • the key anatomical parameters include the tibiofemoral angle and the distal femoral angle;
  • the 3D prosthesis is simulated and matched with the 3D femur and the 3D tibia, and the simulated matching effect is observed in real time; when the simulated matching effect meets the surgical requirements, the simulated matching is deemed to be completed.
  • the steps of medical image data processing include the steps of reconstructing three-dimensional images of bones; the steps of image segmentation; the steps of identifying and marking key axes, key anatomical sites and key anatomical parameters.
  • the steps of medical image data processing include the steps of 3D image reconstruction of bones; the step of image segmentation; the steps of identifying and marking key axes, key anatomical sites and key anatomical parameters based on deep learning.
  • the steps of medical image data processing include the steps of reconstructing 3D images of bones; the steps of image segmentation based on deep learning; the steps of identifying and marking key axes, key anatomical sites and key anatomical parameters based on deep learning.
  • the image segmentation is performed based on deep learning, and the step of image segmentation includes:
  • Building a lower extremity medical image database: acquire a lower extremity medical image data set and manually label the femur, tibia, fibula and patella regions; divide the data set into a training set and a test set; convert the unlabeled medical image data into pictures in a first format and save them, and convert the labeled data into pictures in a second format and save them;
  • A segmentation neural network model is established; the segmentation neural network model includes a coarse segmentation neural network and a precise segmentation neural network; the coarse segmentation neural network serves as the backbone network and performs coarse segmentation, and the precise segmentation neural network performs precise segmentation on the basis of the coarse segmentation;
  • the coarse segmentation neural network is at least one of FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, dilated (atrous) convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3; the precise segmentation neural network is at least one of EfficientDet, SimCLR and PointRend;
  • Model training: the segmentation neural network model is trained with the training set and tested with the test set;
  • the coarse segmentation neural network adopts Unet neural network
  • the Unet neural network includes n upsampling layers and n downsampling layers;
  • Each upsampling layer includes an upsampling operation layer and a convolutional layer
  • Each downsampling layer includes convolutional layers and pooling layers.
  • n may be 2-8, 3-6, or 4-5.
  • each upsampling layer includes 1 upsampling operation layer and 2 convolutional layers, where the size of the convolution kernels in the convolutional layers is 3×3 and the size of the kernels in the upsampling operation layer is 2×2; the numbers of convolution kernels in the successive upsampling layers are 512, 256, 256 and 128.
  • each downsampling layer includes 2 convolutional layers and 1 pooling layer, where the size of the convolution kernels in the convolutional layers is 3×3 and the size of the pooling kernel is 2×2;
  • the numbers of convolution kernels in the successive downsampling layers are 128, 256, 256 and 512.
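The encoder/decoder just described can be sanity-checked with a small shape trace. The following is an illustrative sketch in plain Python, not part of the original disclosure; it assumes 'same'-padded 3×3 convolutions (the padding scheme is not stated here) and a hypothetical 256×256 input:

```python
def unet_shape_trace(size=256):
    """Trace (spatial_size, channels) through the architecture described
    above: 4 downsampling layers (2 convs + 1 max-pool each, with
    128/256/256/512 kernels) followed by 4 upsampling layers (1 upsampling
    op + 2 convs each, with 512/256/256/128 kernels). 'Same'-padded 3x3
    convs keep the spatial size; 2x2 pooling halves it; 2x2 upsampling
    doubles it."""
    shapes = []
    # Encoder: record the shape after each layer's convolutions, then pool.
    for ch in (128, 256, 256, 512):
        shapes.append((size, ch))
        size //= 2
    # Decoder: upsample, then record the shape after the convolutions.
    for ch in (512, 256, 256, 128):
        size *= 2
        shapes.append((size, ch))
    return shapes
```

With a 256×256 input, the trace returns to the full 256×256 resolution with 128 channels at the last upsampling layer, confirming the symmetric layout.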
  • the data set is divided into a training set and a test set according to a ratio of 7:3.
  • the method further includes performing at least one of the following operations:
  • the dropout rate is set to 0.5-0.7; all the convolutional layers are followed by an activation layer, and the activation function used by the activation layer is the relu function.
  • the training is performed as follows:
  • Coarse segmentation: the training set is sent to the coarse segmentation neural network for training; during training, the background pixel value of the data labels is set to 0, the femur to 1, the tibia to 2, the fibula to 3, and the patella to 4.
  • The training batch size (batch_size) is 6, the learning rate is set to 1e-4, the optimizer is the Adam optimizer, and the loss function is the DICE loss; optionally, the training batch size is adjusted according to the change of the loss function during training;
  • Precise segmentation: the coarse result is sent to the precise segmentation neural network for precise segmentation; the initial process first upsamples the coarse segmentation prediction by bilinear interpolation, then selects the points in the feature map whose confidence equals the preset confidence, computes the feature representation of these points by bilinear interpolation, and predicts the labels to which the points belong; the initial process is repeated until the confidence of the upsampled prediction reaches the target confidence.
  • a point with a confidence level of 0.5 is selected as a point with a preset confidence level.
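The point-selection step above can be illustrated with a small NumPy sketch. This is not the disclosed implementation; picking the points whose confidence is closest to the preset confidence of 0.5 is an illustrative stand-in for "points whose confidence is the preset confidence":

```python
import numpy as np

def uncertain_points(prob_map, k=8, threshold=0.5):
    """Return the (row, col) indices of the k points whose foreground
    probability is closest to the preset confidence (0.5), i.e. the points
    the coarse segmentation is least sure about. In a PointRend-style
    refinement these points are then re-predicted from finer features."""
    prob_map = np.asarray(prob_map, dtype=float)
    uncertainty = np.abs(prob_map - threshold)
    flat = np.argsort(uncertainty, axis=None)[:k]
    return np.stack(np.unravel_index(flat, prob_map.shape), axis=1)
```

For example, in a 2×2 probability map where one pixel has probability exactly 0.5, that pixel is selected first for refinement.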
  • the lower extremity medical image data is CT scan data.
  • identifying and marking key axes, key anatomical sites and key anatomical parameters is performed based on deep learning, and the step includes:
  • Identifying key anatomical sites: the key anatomical sites are identified using at least one of the MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD neural network models;
  • the step of identifying key anatomical sites includes:
  • Building a database: acquire a lower-limb medical image dataset and manually calibrate the key anatomical sites; divide the dataset into a training set and a test set, e.g. according to a ratio of 7:3.
  • Model training: use the training set to train the recognition neural network model, and use the test set to test it;
  • the obtaining the key axis by using the key anatomical site includes:
  • Femoral anatomical axis: obtained by fitting the center points on different levels of the femoral medullary canal;
  • Tibial anatomical axis: obtained by fitting the center points on different levels of the tibial medullary cavity;
  • the fitting method is any one of least squares, gradient descent, Gauss-Newton and the Levenberg-Marquardt algorithm;
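A minimal sketch of the fitting step, not taken from the disclosure: a straight axis can be fitted through the per-slice canal center points by least squares, here via the SVD of the centered point cloud:

```python
import numpy as np

def fit_axis(points):
    """Fit a straight line (anatomical axis) through the per-slice
    medullary-canal center points by least squares: the axis passes
    through the centroid of the points along the first principal
    direction of the point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The first right-singular vector minimizes the summed squared
    # perpendicular distances from the points to the line.
    _, _, vh = np.linalg.svd(pts - centroid)
    return centroid, vh[0]
```

For center points that lie along a vertical stack of slices, the fitted direction is (up to sign) the slice-stacking axis.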
  • the femoral mechanical axis and the tibial mechanical axis are each determined from their two identified endpoints.
  • the three-dimensional prosthesis comprises a three-dimensional femoral prosthesis and a three-dimensional tibial prosthesis; further comprising a tibial pad; and
  • The simulated matching includes:
  • Prosthesis implantation: implant the 3D femoral prosthesis into the femur and the 3D tibial prosthesis into the tibia; optionally, also implant the tibial pad into the prosthesis space;
  • Prosthesis selection: select the three-dimensional femoral prosthesis and the three-dimensional tibial prosthesis, and select the simulated surgical conditions;
  • Simulated osteotomy: perform an intelligent osteotomy according to the matching relationship between the 3D prosthesis and the bone, and observe the simulated matching effect of the 3D prosthesis and the bone;
  • the step of selecting a prosthesis includes at least one of the following steps of selecting a three-dimensional femoral prosthesis, selecting a three-dimensional tibial prosthesis, and selecting a simulated surgical condition:
  • selecting a three-dimensional femoral prosthesis includes selecting at least one of a femoral prosthesis type, a femoral prosthesis model, and a three-dimensional spatial position of the femoral prosthesis;
  • selecting a three-dimensional tibial prosthesis includes selecting at least one of a tibial prosthesis type, a tibial prosthesis model, and a three-dimensional space position;
  • selecting simulated surgical conditions includes selecting at least one of femoral surgical parameters and tibial surgical parameters;
  • the femoral surgical parameters include the amount of osteotomy of the distal femur, the amount of osteotomy of the posterior femoral condyle, and the internal and external rotation angle, valgus angle and flexion angle of the femoral prosthesis;
  • the tibial surgical parameters include the amount of tibial osteotomy, the internal and external rotation angle, the valgus angle and the retroversion angle.
  • At least one skeletal structure is displayed, and at least one of the following operation modes is performed:
  • the transparency options include transparent and opaque.
  • the key anatomical sites also include the concave point of the medial femoral condyle, the highest point of the lateral femoral condyle, the lowest point of the medial and posterior condyle of the femur, the medial low point and lateral high point of the tibial plateau, the midpoint of the posterior cruciate ligament and the tibial tubercle.
  • the key axis also includes at least one of the transcondylar line, the posterior condyle line, the tibia-knee joint line, the femoral sagittal axis, and the femoral-knee joint line;
  • the key anatomical parameters also include the posterior femoral condyle angle.
  • the key axis is marked in a state where the transparency is opaque
  • three-dimensional images and two-dimensional images of the skeletal structure are obtained through the medical image data processing; the two-dimensional images include cross-sectional, sagittal and coronal images, and the cross-sectional, sagittal and coronal images are linked across the three axes.
  • At least one of the femur and the tibia is displayed independently; the observation angle of at least one of the femur and the tibia is adjusted, and then at least one of the key axes and the key anatomical sites is marked manually.
  • the method further includes:
  • a second aspect of the present disclosure provides a deep learning-based preoperative planning system for total knee replacement, the system comprising:
  • the medical image data processing module is configured to obtain a three-dimensional image of the skeletal structure through medical image data processing, identify and mark key axes, key anatomical sites and key anatomical parameters;
  • the skeletal structures include the femur, tibia, fibula and patella;
  • the key anatomical sites include the center points on different levels of the femoral medullary cavity, the center points on different levels of the tibial medullary cavity, the center point of the hip joint, the center point of the knee joint, the center point of the intercondylar spine, and the center point of the ankle joint;
  • the key axes include femoral anatomical axis, femoral mechanical axis, tibial anatomical axis and tibial mechanical axis;
  • the key anatomical parameters include tibiofemoral angle and distal femoral angle;
  • a simulation matching module configured to simulate the matching of the 3D prosthesis with the 3D femur and the 3D tibia, and observe the simulation matching effect in real time
  • Display module: configured to display 3D images of the skeletal structures, key axes, key anatomical sites, key anatomical parameters, and the simulation matching effect.
  • the medical image data processing module includes:
  • a three-dimensional reconstruction unit configured to obtain a three-dimensional image of the skeletal structure
  • an image segmentation unit configured to segment the femur, tibia, fibula and patella
  • An identification marking unit configured to identify and mark key axes, key anatomical sites and key anatomical parameters.
  • the system further includes:
  • an image combination module configured to arbitrarily combine skeletal structures
  • the image transparency switching module is configured to switch the transparency of the skeletal structure
  • an image scaling module configured to scale at least one of a three-dimensional image and a two-dimensional image of the skeletal structure
  • an image rotation module configured to rotate the image according to any axis
  • the image moving module is configured to move the image.
  • the system further includes:
  • a third aspect of the present disclosure provides an apparatus, comprising:
  • one or more processors;
  • a storage device configured to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the deep learning-based preoperative planning method for total knee arthroplasty according to any one of the first aspect.
  • A fourth aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the deep learning-based preoperative planning method for total knee arthroplasty described in any one of the first aspect.
  • the preoperative planning method and system for total knee arthroplasty based on deep learning realize automatic segmentation of femur, tibia, fibula and patella based on deep learning.
  • the present disclosure improves segmentation efficiency and accuracy.
  • the methods and systems provided by the present disclosure realize automatic identification and measurement of key axes and key anatomical parameters based on deep learning.
  • the deep learning-based preoperative planning system for total knee arthroplasty provided by the present disclosure is intelligent and efficient, the doctor has a short learning time, and can be mastered without long-term, large-scale surgery training; moreover, the cost is low and complex equipment is not required.
  • the size and position of the implanted prosthesis can be determined before surgery, and whether the prosthesis can meet the performance requirements can be virtually tested, so as to optimize the reconstruction of the articular surface and the determination of the prosthesis position; provide technical support for doctors to make surgery more accurate and safer; and promote the development of surgery in the direction of intelligence, precision and minimal invasiveness.
  • FIG. 1 schematically shows a flowchart of a preoperative planning method for total knee arthroplasty based on deep learning provided by the present disclosure
  • FIG. 2 schematically shows a block diagram of a preoperative planning system for total knee arthroplasty based on deep learning provided by the present disclosure
  • Fig. 3 shows three-dimensional images of the four segmented skeletal structures displayed in combination; a and b are views at different angles;
  • Fig. 4 shows three-dimensional images of the femur when only the femur is displayed; a and b are views at different angles;
  • Fig. 5 shows three-dimensional images of the tibia when only the tibia is displayed; a and b are views at different angles;
  • Figure 6 is an enlarged three-dimensional image of the tibial plateau
  • Fig. 7 is the result graph after marking the key axis
  • Fig. 8 is the interface of simulation matching before osteotomy (the imaging effect is transparent);
  • Fig. 9 is the interface of simulation matching after osteotomy (the imaging effect is opaque);
  • Figure 10 is the image at different angles, a is the femur, b is the tibia;
  • Figure 11 is the result of the postoperative simulation
  • FIG. 12 schematically shows a structural diagram of the device provided by the present disclosure.
  • 101 Medical image data processing module
  • 201 Simulation matching module
  • 301 Display module
  • 401 Data import module
  • 501 Visual postoperative simulation module.
  • the present disclosure provides a deep learning-based preoperative planning method for total knee arthroplasty.
  • the method is based on medical image data of a patient's lower extremities. Referring to FIG. 1 , the method provided by the present disclosure includes the following steps:
  • The step of deep learning-based medical image data processing: obtain a three-dimensional image of the skeletal structure through the medical image data processing, and identify and mark key axes, key anatomical sites and key anatomical parameters;
  • the skeletal structure includes the femur, tibia, fibula and patella;
  • the key axes include the femoral anatomical axis, the femoral mechanical axis, the tibial anatomical axis and the tibial mechanical axis;
  • the key anatomical sites include the center points on different levels of the femoral medullary canal, the center points on different levels of the tibial medullary canal, the hip joint center point, the knee joint center point, the intercondylar spine center point and the ankle joint center point;
  • the key anatomical parameters include tibiofemoral angle and distal femoral angle;
  • The step of visual simulation matching: simulate the matching of the 3D prosthesis with the 3D femur and the 3D tibia, and observe the simulation matching effect in real time; when the simulation matching effect meets the surgical requirements, the simulation matching is deemed to be completed.
  • the preoperative planning method for total knee arthroplasty based on deep learning realizes automatic segmentation of femur, tibia, fibula and patella based on deep learning, and improves the efficiency and accuracy of segmentation. Moreover, the method provided by the present disclosure realizes automatic identification and measurement of key axes and key anatomical parameters based on deep learning.
  • the method provided by the present disclosure is intelligent and efficient, the doctor has a short learning time, and can be mastered without long-term and large-scale operation training; moreover, the cost is low, and complex equipment is not required.
  • the size and position of the implanted prosthesis can be determined before surgery, and whether the prosthesis can meet the performance requirements can be virtually tested, so as to optimize the reconstruction of the articular surface and the determination of the position of the prosthesis; provide technical support for doctors, Make surgery more accurate and safer; promote the development of surgery in the direction of intelligence, precision, and minimally invasiveness.
  • the steps of medical image data processing include a step of 3D image reconstruction of bones; a step of image segmentation; and a step of identifying and marking key axes, key anatomical sites and key anatomical parameters.
  • the present disclosure does not limit the order of the three steps included in the medical image data processing steps.
  • Three-dimensional image reconstruction can be performed first, followed by segmentation and then identification and marking; or segmentation can be performed first, followed by three-dimensional image reconstruction and then identification and marking. The possible orderings are not listed one by one here.
  • Through the identification and marking step, at least the femoral anatomical axis, femoral mechanical axis, tibial anatomical axis and tibial mechanical axis are identified and marked on the femur and tibia, and at least the key anatomical parameters of the tibiofemoral angle and the distal femoral angle are obtained.
  • AI image segmentation and/or AI identifying and marking key axes, key anatomical sites and key anatomical parameters can be implemented through deep learning technology.
  • the image segmentation is performed based on deep learning, and the step of image segmentation includes:
  • Building a lower extremity medical image database: obtain a lower extremity medical image dataset and manually mark the femur, tibia, fibula and patella regions; divide the dataset into a training set and a test set, e.g. according to a ratio of 7:3; convert the pre-labeled medical image data (such as two-dimensional cross-sectional image data in dicom format) into pictures in a first format (such as jpg) and save them, and convert the labeled data into pictures in a second format (such as png) and save them; the first format and the second format are different;
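An illustrative sketch of the database-building step, not taken from the disclosure: the cases are assumed to be tracked by hypothetical case identifiers, and the actual DICOM-to-jpg/png conversion would use libraries such as pydicom and Pillow, omitted here.

```python
import random

# Pixel values used for the annotation masks, following the labeling
# scheme described in the coarse-segmentation training step.
LABELS = {"background": 0, "femur": 1, "tibia": 2, "fibula": 3, "patella": 4}

def split_dataset(case_ids, train_ratio=0.7, seed=42):
    """Shuffle the case identifiers and split them into a training set
    and a test set according to the given ratio (7:3 by default)."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n_train = round(len(ids) * train_ratio)
    return ids[:n_train], ids[n_train:]
```

Splitting 100 cases this way yields 70 training cases and 30 test cases, matching the 7:3 ratio.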
  • Model training: the segmentation neural network model is trained with the training set and tested with the test set;
  • the segmentation neural network model includes a cascaded coarse segmentation neural network and a precise segmentation neural network; the coarse segmentation neural network acts as the backbone network and performs coarse segmentation, and the precise segmentation neural network performs precise segmentation based on the coarse segmentation.
  • The coarse segmentation neural network is selected from at least one of FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, dilated (atrous) convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF and DeepMask;
  • the precise segmentation neural network is at least one of EfficientDet, SimCLR and PointRend.
  • the Unet neural network includes n up-sampling layers and n down-sampling layers; each up-sampling layer includes an up-sampling operation layer and a convolution layer; and each down-sampling layer includes a convolution layer and a pooling layer.
  • the value of n can be 2-8, 3-6, or 4-5.
  • Each upsampling layer includes 1 upsampling operation layer and 2 convolutional layers.
  • the size of the convolution kernel in the convolutional layer is 3*3, and the size of the convolution kernel in the upsampling operation layer is 2*2.
  • the numbers of convolution kernels in the successive upsampling layers are 512, 256, 256 and 128.
  • Each downsampling layer includes 2 convolution layers and 1 pooling layer.
  • the size of the convolution kernel in the convolution layer is 3*3, and the size of the convolution kernel in the pooling layer is 2*2.
  • The numbers of convolution kernels in the successive downsampling layers are 128, 256, 256 and 512.
  • the method further includes performing at least one of the following operations:
  • the training is performed as follows:
  • Coarse segmentation: during training, the whole training set is sent to the Unet neural network; the background pixel value of the data labels is set to 0, the femur to 1, the tibia to 2, the fibula to 3, and the patella to 4.
  • the batch size batch_size is 6, the learning rate is set to 1e-4, the optimizer uses the Adam optimizer, and the loss function used is DICE loss.
  • the training batch size can be adjusted according to the change of the loss function during the training process;
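The DICE loss named above can be sketched as follows. This is an illustrative single-class soft-DICE formulation with a smoothing term, not asserted to be the exact formulation used in the disclosure:

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """Soft DICE loss for one class: 1 - 2|X∩Y| / (|X| + |Y|).
    pred holds per-pixel probabilities, target the binary ground truth;
    the smoothing term keeps the ratio defined for empty masks."""
    pred = np.asarray(pred, dtype=float).reshape(-1)
    target = np.asarray(target, dtype=float).reshape(-1)
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
    return 1.0 - dice
```

A perfect binary prediction gives a loss of 0, while a completely disjoint prediction gives a loss close to 1, which is why the loss curve tracks segmentation overlap directly.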
  • Precise segmentation: after the coarse segmentation is completed, the result is sent to the PointRend neural network for precise segmentation; the initial process first upsamples the coarse segmentation prediction by bilinear interpolation, then selects the points in the feature map whose confidence equals the preset confidence, computes the feature representation of these points by bilinear interpolation, and predicts the labels to which the points belong; the initial process is repeated until the confidence of the upsampled prediction reaches the target confidence.
  • a point with a confidence level of 0.5 is selected as a point with a preset confidence level.
  • the step of identifying markers based on deep learning includes:
  • The key anatomical sites to be identified in the present disclosure include the center points on different levels of the femoral medullary canal, the center points on different levels of the tibial medullary canal, the hip joint center point, the knee joint center point, the intercondylar spine center point and the ankle joint center point. In some embodiments, they also include the medial femoral condyle depression, the lateral femoral condyle apex, the lowest points of the medial and posterior femoral condyles, the medial low point and lateral high point of the tibial plateau, the midpoint of the posterior cruciate ligament, the medial border point of the tibial tubercle, the lowest point of the distal femur, etc.
  • Steps to identify key anatomical sites include:
  • Build a database acquire a medical image dataset of lower limbs, and manually calibrate key anatomical sites; divide the dataset into a training set and a test set, which can be divided according to the ratio of 7:3.
  • the recognition neural network model is at least one of MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
  • The Conv layers and Max Pooling layers are used to scale down the resolution of the features;
  • the network then bifurcates, and the upper and lower channels perform convolution operations in different scale spaces to extract features;
  • after obtaining the lowest-resolution features, the network starts upsampling and gradually combines feature information of different scales; for the lower resolutions, nearest-neighbor upsampling can be used, and the two different feature sets are added element by element;
  • the entire hourglass is symmetric: for each network layer in the process of acquiring low-resolution features, there is a corresponding network layer in the upsampling process;
  • after the hourglass network module output is obtained, two consecutive 1×1 Conv layers are applied to obtain the final network output; the output is a set of heatmaps, each heatmap representing the probability that a key point exists at each pixel.
  • Model training: use the training set to train the recognition neural network model, and use the test set to test it.
  • The detection of multiple points outputs feature maps with multiple channels; the network uses the Adam optimizer, the learning rate is 1e-5, the batch size is 4, and the loss function is the L2 loss.
  • The training batch size can be adjusted according to the change of the loss function during training, and the coordinate values of the key points are thereby obtained.
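Reading key-point coordinates out of the heatmaps can be sketched as follows. This is an illustrative decoding step (taking each channel's maximum response), not the disclosed implementation:

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Each channel is a heatmap giving, per pixel, the probability that
    one key anatomical site lies there; the predicted coordinate of that
    site is taken as the location of the channel's maximum response."""
    coords = []
    for hm in heatmaps:
        hm = np.asarray(hm, dtype=float)
        r, c = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((int(r), int(c)))
    return coords
```

With one channel per key anatomical site, the result is one (row, column) coordinate per site, which is then mapped back to the CT slice geometry.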
  • The fitting method can be any one of least squares, gradient descent, Gauss-Newton and the Levenberg-Marquardt algorithm.
  • The mechanical axes can be obtained from two endpoints. For example, the two endpoints of the femoral mechanical axis, the hip joint center point and the knee joint center point, have already been identified, and the femoral mechanical axis is determined by these two points.
  • the key anatomical parameters that can be automatically measured in this step include the tibiofemoral angle, the distal femoral angle, and the posterior femoral condyle angle can also be automatically measured.
  • the present disclosure can obtain not only a three-dimensional image of the skeletal structure, but also a two-dimensional image through the medical image data processing;
  • the sagittal image and coronal image can be linked across the three axes.
  • the three-dimensional images of the skeletal structure obtained through medical image data processing can be combined arbitrarily, thereby realizing flexible and diverse display modes of the skeletal structure.
  • the displayed combinations include any of the following: femur only; tibia only; fibula only; patella only; femur and tibia; femur and fibula; femur and patella; tibia and fibula; tibia and patella; fibula and patella; femur, tibia, and fibula; femur, tibia, and patella; femur, fibula, and patella; tibia, fibula, and patella; and femur, tibia, fibula, and patella.
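The display combinations listed above are simply the non-empty subsets of the four segmented bones; a minimal sketch (a hypothetical helper, not part of the disclosed system) that enumerates all fifteen of them:

```python
from itertools import combinations

BONES = ("femur", "tibia", "fibula", "patella")

def display_combinations(bones=BONES):
    """Enumerate every non-empty subset of the four segmented bones,
    i.e. all 15 display combinations."""
    combos = []
    for r in range(1, len(bones) + 1):
        combos.extend(combinations(bones, r))
    return combos
```
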
  • the three-dimensional image of the skeletal structure obtained through medical image data processing can be transformed with transparency, so that the image exhibits various imaging effects.
  • transparency can be toggled between transparent and opaque.
  • the imaging effect of the femur can be selected to be transparent or opaque.
  • the imaging effect of the tibia can be selected to be transparent or opaque.
  • when two types of bones (e.g., the femur and the tibia) are displayed at the same time, the visualization effect of the two types of bones can be selected to be transparent or opaque.
  • when displaying the femur, tibia, and fibula at the same time, the visualization effect of the three types of bones can be selected to be transparent or opaque.
  • when displaying the femur, tibia, fibula, and patella at the same time, the bones can be rendered transparent or opaque.
  • the three-dimensional image of the skeletal structure obtained by medical image data processing can be image zoomed.
  • the femur image can be zoomed (reduced or enlarged, the same below).
  • the tibia image can be zoomed.
  • the images of these three types of bones can be zoomed.
  • the two-dimensional images can also be zoomed in and out; e.g., the cross-sectional, sagittal, and coronal images can be zoomed in or out at the same time.
  • the three-dimensional image of the skeletal structure obtained by medical image data processing can be rotated about any axis, and the image can also be moved.
  • the femur can be rotated about any axis.
  • the tibia can be rotated about any axis.
  • the femur and tibia can be rotated about any axis.
  • this bone structure can be rotated about any axis.
  • the flexible and diverse display modes more intuitively display the three-dimensional structure of the bone, so that the doctor (or other medical staff) can observe the image of the bone structure from multiple angles and at multiple levels.
  • transparent means that the image transparency is 0.3-0.75, and opaque means that the image transparency is 0.8-1.
  • the present disclosure realizes the identification and marking of key axes, key anatomical sites, and key anatomical parameters by identifying and marking steps.
  • the key axes include the femoral anatomical axis, the femoral mechanical axis, the tibial anatomical axis, and the tibial mechanical axis.
  • the critical axis further includes at least one of the transcondylar line, the posterior condylar line, the tibial-knee line, the femoral sagittal axis, and the femoral-knee line.
  • key anatomical sites include the center points at different levels of the femoral medullary canal, the center points at different levels of the tibial medullary canal, the hip joint center point, the knee joint center point, the intercondylar spine center point, and the ankle joint center point; they may further include the concave point of the medial femoral condyle, the highest point of the lateral femoral condyle, the lowest points of the medial and lateral posterior femoral condyles, the medial low point and lateral high point of the tibial plateau, the center of the posterior cruciate ligament insertion, the medial border of the tibial tubercle, and the lowest points of the distal femur.
  • key anatomical parameters include the tibiofemoral angle and the distal femoral angle. In some embodiments, the key anatomical parameters further include the posterior femoral condyle angle.
  • the key axes are marked while the display state is opaque.
  • after the key axes are marked, it is observed whether at least one of the key axes and the key anatomical sites is accurately positioned, and any misaligned key axis or key anatomical site is manually re-marked; at least one of the femur and the tibia can be displayed independently, its observation angle adjusted by rotation, and then at least one of the key axes and the key anatomical sites is manually marked.
  • the medical image data in the method provided by the present disclosure are CT scan data in dicom format. For total knee arthroplasty, the CT scan covers the full length of the lower extremity, from the hip joint to the ankle joint. Thus the medical image data in the present disclosure are full-length lower-limb dicom data, the full length of the lower limb extending from the hip joint to the ankle joint.
  • Femoral anatomical axis: the centerline of the femoral diaphysis.
  • Femoral mechanical axis: one end is located at the center of the hip joint, and the other end is located at the center of the femoral knee joint (the apex of the femoral intercondylar fossa).
  • Tibial anatomical axis: the centerline of the tibial diaphysis.
  • Tibial mechanical axis: one end is located at the center of the tibial knee joint (the center of the intercondylar spine), and the other end is located at the center of the tibial ankle joint (the midpoint of the line connecting the lateral cortices of the medial and lateral malleoli).
  • Transcondylar line: the line connecting the concave point of the medial femoral condyle and the highest point of the lateral femoral condyle.
  • Posterior condylar line: the line connecting the lowest points of the posterior femoral condyles.
  • Femoral knee line: the line connecting the lowest points of the distal femur.
  • Tibial knee line: the line connecting the medial low point and lateral high point of the tibial plateau.
  • Femoral sagittal axis: the line connecting the center of the posterior cruciate ligament insertion and the medial border of the tibial tubercle.
  • Tibiofemoral angle (also known as mTFA): the angle formed by the femoral mechanical axis and the tibial mechanical axis.
  • Distal femoral angle: the angle between the femoral mechanical axis and the femoral anatomical axis.
  • Posterior femoral condyle angle (also known as PCA): the angle between the projection of the femoral transcondylar line and the posterior condylar line in the cross-section.
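As a simplified illustration of how such angles can be computed once the end points of each axis have been identified (assuming each axis is represented by two point coordinates; the function is a hypothetical sketch, not the disclosed measurement routine), the angle between two axes is the angle between their direction vectors:

```python
import math

def axis_angle_deg(p1, p2, q1, q2):
    """Angle in degrees between the line p1->p2 and the line q1->q2,
    e.g. between the femoral and tibial mechanical axes.
    Points are (x, y, z) tuples."""
    v = [b - a for a, b in zip(p1, p2)]
    w = [b - a for a, b in zip(q1, q2)]
    dot = sum(x * y for x, y in zip(v, w))
    nv = math.sqrt(sum(x * x for x in v))
    nw = math.sqrt(sum(x * x for x in w))
    # clamp to guard against floating-point drift outside [-1, 1]
    cosang = max(-1.0, min(1.0, dot / (nv * nw)))
    return math.degrees(math.acos(cosang))
```
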
  • the three-dimensional prosthesis comprises a three-dimensional femoral prosthesis and a three-dimensional tibial prosthesis;
  • the analog matching includes:
  • the 3D femoral prosthesis is implanted into the femur (i.e., the 3D image of the femur), and the 3D tibial prosthesis is implanted into the tibia (i.e., the 3D image of the tibia); the visualized 3D prostheses can be differentiated from the bone structures by color;
  • Prosthesis selection: select the three-dimensional femoral prosthesis and the three-dimensional tibial prosthesis, and select the simulated surgical conditions;
  • Simulated osteotomy: perform intelligent osteotomy according to the matching relationship between the 3D prosthesis and the bone, and observe the simulated matching effect;
  • the step of prosthesis selection includes at least one of the following steps of three-dimensional femoral prosthesis selection, three-dimensional tibial prosthesis selection, and selection of simulated surgical conditions:
  • selecting a three-dimensional femoral prosthesis includes selecting at least one of the femoral prosthesis type, the femoral prosthesis model (the model represents the size, the same below), and the three-dimensional spatial position of the femoral prosthesis;
  • the step of selecting a three-dimensional tibial prosthesis includes selecting at least one of the tibial prosthesis type, the tibial prosthesis model, and the three-dimensional spatial position of the tibial prosthesis; at least one of the tibial pad type and model can also be selected.
  • the stored femoral prosthesis types and their models, tibial prosthesis types and their models, and tibial pad types and their models mentioned here are the product types and model numbers of commercially available products (prostheses currently available for total knee arthroplasty).
  • the types of femoral prostheses include ATTUNE-PS, ATTUNE-CR, SIGMA-PS150, etc.
  • the models of ATTUNE-PS are 1, 2, 3, 3N, 4, 4N, 5, 5N, 6, 6N.
  • the models of SIGMA-PS150 are 1, 1.5, 2, 2.5, 3, 4, 4N, 5, and 6.
  • the types of tibial prosthesis include ATTUNE-FB, ATTUNE-RP, SIGMA-MBT, etc.
  • the models of ATTUNE-FB are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
  • the models of SIGMA-MBT are 1, 1.5, 2, 2.5, 3, 4, 5, 6, and 7.
  • the present disclosure does not exemplify them one by one here.
  • selecting simulated surgical conditions includes selecting at least one of femoral surgical parameters and tibial surgical parameters;
  • the femoral surgical parameters include the distal femoral osteotomy amount, the posterior femoral condyle osteotomy amount, and the internal-external rotation angle, varus-valgus angle, and flexion angle of the femoral prosthesis;
  • the tibial surgical parameters include the tibial osteotomy amount, the internal-external rotation angle, the varus-valgus angle, and the posterior slope angle.
  • the simulated matching effect is observed under one or more of the selected simulated surgical conditions.
  • the method further includes S3: a step of visualizing postoperative simulation, for simulating the postoperative limb movement situation of total knee arthroplasty.
  • the method (not shown in FIG. 1 ) further includes the step of exporting the simulation matching data that meets the surgical requirements to form a preoperative planning report, so as to facilitate preoperative deployment by the doctor.
  • the present disclosure provides a deep learning-based preoperative planning system for total knee arthroplasty.
  • the system includes:
  • the medical image data processing module 101 is configured to obtain a three-dimensional image of the skeletal structure through medical image data processing, identify and mark key axes, key anatomical sites and key anatomical parameters;
  • the skeletal structures include femur, tibia, fibula and patella;
  • the key anatomical sites include the center points on different levels of the femoral medullary canal, the center points on different levels of the tibial medullary canal, the center point of the hip joint, the center point of the knee joint, the center point of the intercondylar spine, and the center point of the ankle joint.
  • the key axes include femoral anatomical axis, femoral mechanical axis, tibial anatomical axis and tibial mechanical axis;
  • the key anatomical parameters include tibiofemoral angle and distal femoral angle;
  • the simulation matching module 201 is configured to simulate the matching of the three-dimensional prosthesis with the three-dimensional femur and the three-dimensional tibia, and observe the simulation matching effect in real time;
  • Display module 301 configured to display a three-dimensional image of the skeletal structure, key axes, key anatomical sites, key anatomical parameters and simulation matching effects.
  • the preoperative planning system for total knee arthroplasty based on deep learning realizes automatic segmentation of femur, tibia, fibula and patella based on deep learning, and improves segmentation efficiency and accuracy. Moreover, the system provided by the present disclosure realizes automatic identification and measurement of key axes and key anatomical parameters based on deep learning.
  • the system provided by the present disclosure is intelligent and efficient, the doctor has a short learning time, and can be mastered without long-term and large-scale operation training; moreover, the cost is low, and complex equipment is not required.
  • the system provided by the present disclosure can determine the size and position of the implanted prosthesis before surgery, and can virtually test whether the prosthesis meets the performance requirements, so as to optimize the articular-surface reconstruction and the determination of the prosthesis position; it provides technical support for doctors, makes surgery more accurate and safer, and promotes the development of surgery in the direction of intelligence, precision, and minimal invasiveness.
  • the medical image data processing module 101 includes:
  • a three-dimensional reconstruction unit configured to obtain a three-dimensional image of the skeletal structure
  • an image segmentation unit configured to segment the femur, tibia, fibula and patella
  • An identification marking unit configured to identify and mark key axes, key anatomical sites and key anatomical parameters.
  • the deep learning-based preoperative planning system for total knee arthroplasty further includes a data import module 404 configured to import medical image data.
  • the deep learning-based preoperative planning system for total knee arthroplasty further includes a visual postoperative simulation module 501, which is configured to simulate postoperative limb movements of the total knee arthroplasty.
  • the deep learning-based preoperative planning system for total knee arthroplasty further includes an image combination module configured to arbitrarily combine skeletal structures.
  • the deep learning-based preoperative planning system for total knee arthroplasty further includes an image transparency switching module configured to switch the transparency of the bone structure.
  • the system further includes an image scaling module configured to scale at least one of a three-dimensional image and a two-dimensional image of the skeletal structure.
  • the deep learning-based preoperative planning system for total knee arthroplasty further includes an image rotation module configured to rotate the image according to any axis.
  • the deep learning-based preoperative planning system for total knee arthroplasty further includes an image moving module configured to move the image.
  • the deep learning-based preoperative planning system for total knee arthroplasty further includes a data exporting module configured to export the simulated matching data that meets the surgical requirements to form a preoperative planning report.
  • the present disclosure provides a device in a third aspect, comprising:
  • one or more processors;
  • a storage device configured to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the deep learning-based preoperative planning method for total knee arthroplasty provided in the first aspect of the present disclosure.
  • the present disclosure provides, in a fourth aspect, a computer-readable storage medium having a computer program stored thereon,
  • the computer program when executed by the processor, implements the deep learning-based preoperative planning method for total knee arthroplasty provided in the first aspect of the present disclosure.
  • Import data: use the data import module 404 to import the full-length lower-extremity dicom data obtained by CT scan into the deep learning-based preoperative planning system for total knee arthroplasty.
  • Medical image data processing based on deep learning: use the medical image data processing module 101 to perform this step; obtain three-dimensional and two-dimensional images of the skeletal structure through medical image data processing, and identify and mark key axes, key anatomical sites, and key anatomical parameters;
  • the bone structure includes femur, tibia, fibula and patella;
  • key anatomical sites include the center points at different levels of the femoral medullary canal, the center points at different levels of the tibial medullary canal, the hip joint center point, the knee joint center point,
  • the intercondylar spine center point, and the ankle joint center point; they also include the concave point of the medial femoral condyle, the highest point of the lateral femoral condyle, the lowest points of the medial and lateral posterior femoral condyles, the medial low point and lateral high point of the tibial plateau, the center of the posterior cruciate ligament insertion, and the medial border of the tibial tubercle;
  • this step includes:
  • three-dimensional image reconstruction is performed according to the full-length lower-limb dicom data to obtain a three-dimensional image of the lower-limb skeleton, which can be displayed by the display module 301.
  • the three-dimensional image reconstruction can be realized using existing methods; therefore, the three-dimensional reconstruction unit can be an existing unit capable of realizing three-dimensional image reconstruction.
  • Constructing a lower-extremity medical image database: obtain a lower-extremity CT image dataset and manually mark the femur, tibia, fibula, and patella regions; divide the dataset into a training set and a test set at a ratio of 7:3; convert the cross-sectional dicom image data into jpg-format pictures and save them, and convert the marked data into png-format pictures and save them. Two-dimensional cross-sectional data are described here, but two-dimensional sagittal or two-dimensional coronal data can also be used.
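The 7:3 dataset division above can be sketched as a seeded random split (a minimal illustration; beyond the ratio, the actual partitioning procedure is not specified by the disclosure):

```python
import random

def split_dataset(case_ids, train_ratio=0.7, seed=42):
    """Shuffle the cases and split them into training and test sets
    at the 7:3 ratio described above."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    cut = round(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]
```
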
  • a segmentation neural network model is established; the segmentation neural network model is Unet+PointRend, where the Unet neural network is used for rough segmentation and the PointRend neural network is used for precise segmentation; the Unet neural network includes 4 upsampling layers and 4 downsampling layers; each upsampling layer includes 1 upsampling operation layer and 2 convolutional layers.
  • the size of the convolution kernels in the convolutional layers is 3×3, and the size of the convolution kernel in the upsampling operation layer is 2×2.
  • the number of convolution kernels in each upsampling layer is 512, 256, 256, and 128; each downsampling layer includes 2 convolutional layers and 1 pooling layer; the convolution kernel size in the convolutional layers is 3×3, the kernel size in the pooling layer is 2×2, and the number of convolution kernels in each convolutional layer is 128, 256, 256, and 512; a dropout layer follows the last upsampling layer, with the dropout rate set to 0.5-0.7; every convolutional layer is followed by an activation layer, and the activation function used by the activation layer is the ReLU function.
  • Model training, including:
  • Coarse segmentation training: send all the training data to the Unet neural network for training; during training, the background pixel value of the data labels is set to 0, the femur to 1, the tibia to 2, the fibula to 3, and the patella to 4.
  • the batch_size is 6, the learning rate is set to 1e-4, the optimizer is the Adam optimizer, and the loss function used is the DICE loss; the training batch size is adjusted according to the change of the loss function during training;
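A minimal sketch of a DICE loss under the five-class label convention above (soft Dice averaged over classes; the exact formulation used in the disclosure may differ):

```python
import numpy as np

# Label convention from the training step above:
# background=0, femur=1, tibia=2, fibula=3, patella=4
NUM_CLASSES = 5

def dice_loss(probs, labels, eps=1e-6):
    """Multi-class soft Dice loss.
    probs:  (C, H, W) softmax probabilities
    labels: (H, W) integer class labels
    Returns 1 - mean per-class Dice coefficient."""
    dices = []
    for c in range(probs.shape[0]):
        p = probs[c]
        g = (labels == c).astype(float)  # one-hot ground truth for class c
        inter = (p * g).sum()
        dices.append((2 * inter + eps) / (p.sum() + g.sum() + eps))
    return 1.0 - float(np.mean(dices))
```
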
  • Precise segmentation training: after the rough segmentation is completed, the result is sent to the PointRend neural network for precise segmentation; the initial process first uses bilinear interpolation to upsample the coarse segmentation prediction, then selects a number of points with a confidence of 0.5 in the feature map as the points of preset reliability, computes the feature representation of these points by bilinear interpolation, and predicts the labels to which the points belong; this initial process is repeated until the confidence of the upsampled prediction reaches the target confidence.
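The point-selection idea above — picking locations whose predicted probability is closest to 0.5, i.e. the least confident pixels — can be sketched as follows (an illustrative simplification of the PointRend selection step, not the disclosed implementation):

```python
import numpy as np

def select_uncertain_points(prob_map, k, target=0.5):
    """Pick the k pixel coordinates whose predicted probability is
    closest to `target` (0.5 = maximally uncertain)."""
    flat = np.abs(prob_map.ravel() - target)
    idx = np.argsort(flat)[:k]          # k most uncertain pixels
    ys, xs = np.unravel_index(idx, prob_map.shape)
    return list(zip(ys.tolist(), xs.tolist()))
```
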
  • the above segmentation process can be implemented in the image segmentation unit; the four types of skeletal structures segmented are unconnected and have clear edges.
  • Steps include:
  • Steps to identify key anatomical sites include:
  • the recognition neural network model is hourglass, and the details of the hourglass network will not be described in detail here.
  • Model training: during training, orthographic images with pixel values of 0-255 are input together with label.txt, and the coordinates of the corresponding points can be found through the name of each image; if the coordinates of the target points were used directly for learning, the neural network would need to convert spatial positions into coordinates by itself, which is a mapping that is difficult to learn.
  • therefore, these points are converted into Gaussian maps and supervised with heatmaps; that is, the output of the network is a feature map of the same size as the input, where the position of the point is 1 and the other positions are 0; the detection of multiple points outputs feature maps of multiple channels; the network uses the Adam optimizer with a learning rate of 1e-5 and a batch size of 4, and the loss function uses L2 regularization.
  • the training batch size can be adjusted according to the change of the loss function during training, and the coordinate values of the key points are thereby obtained.
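The Gaussian-map supervision and the recovery of the coordinate values can be sketched as follows (`sigma` is an assumed hyperparameter; the disclosure does not specify it):

```python
import numpy as np

def gaussian_heatmap(h, w, cy, cx, sigma=2.0):
    """Build the Gaussian supervision map for one key point: the
    peak (value 1) sits at the point and decays toward 0 elsewhere."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def decode_keypoint(heatmap):
    """Recover the key-point coordinates as the argmax of a heatmap."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)
```
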
  • the femoral anatomical axis can be obtained by fitting the center points at different levels of the femoral medullary canal.
  • the tibial anatomical axis can be obtained by fitting the center points at different levels of the tibial medullary canal.
  • the fitting method is any one of least squares, gradient descent, Gauss-Newton, and the Levenberg-Marquardt algorithm.
  • the femoral mechanical axis and the tibial mechanical axis can each be obtained from their two endpoints.
  • the two end points of the femoral mechanical axis (the hip joint center point and the knee joint center point) have already been identified, so the femoral mechanical axis can be determined from these two points.
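A sketch of the least-squares option mentioned above: fit a 3D line to the canal center points as the line through their centroid along the first principal direction (an assumed representation; the other listed fitting methods would serve the same purpose):

```python
import numpy as np

def fit_axis(points):
    """Least-squares fit of a 3D line (e.g. the femoral anatomical
    axis) to medullary-canal center points: the line passes through
    the centroid along the first principal direction of the points."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]  # unit vector of the best-fit line
    return centroid, direction
```
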
  • the above-mentioned step of identifying and marking is realized in the identifying and marking unit.
  • the present disclosure does not limit the order of the three steps included in the medical image data processing steps.
  • a sequence of processing steps is given above, but it should not be construed as limiting the processing order.
  • Figure 3 is a three-dimensional image of the four types of bones combined after segmentation. The visualization effect is opaque (it can be switched to a transparent state). The angles of image a and image b are different, and different angles can be selected for observation. Since the present disclosure segments four types of bone structures, namely the femur, tibia, fibula, and patella, these four types of bone structures can obviously be combined arbitrarily.
  • Figure 4 is a three-dimensional image of the femur showing only the femur, and the developing effect is opaque (can be switched to a transparent state), wherein the angles of the a and b images are different.
  • Figure 5 is a three-dimensional image of the tibia showing only the tibia, and the developing effect is opaque (it can be switched to a transparent state), wherein the angles of a and b are different.
  • Figure 6 is an enlarged view of the tibial plateau of Figure 5b.
  • the 3D images in any combination can be enlarged or reduced.
  • Figure 7 shows the result marked with key axes, key anatomical sites, and key anatomical parameters. It can be observed whether the position of each key anatomical site and key axis is correct; if not, the key anatomical site or key axis can be manually re-marked (achieved by manually marking the key anatomical sites).
  • the 3D prosthesis is simulated and matched with the 3D femur and 3D tibia, and the simulated matching effect is observed in real time; when the simulated matching effect meets the surgical requirements, the simulated matching is deemed to be completed.
  • the three-dimensional prosthesis includes a three-dimensional femoral prosthesis and a three-dimensional tibial prosthesis; this step can be performed as follows:
  • Implantation of prosthesis: according to the previous segmentation, identification, and marking results, the 3D femoral prosthesis is automatically implanted into the femur, the 3D tibial prosthesis is implanted into the tibia, and the tibial pad is implanted into the prosthesis space;
  • Prosthesis selection: select the type and model of the three-dimensional femoral prosthesis and adjust its three-dimensional spatial position; select the type and model of the three-dimensional tibial prosthesis and adjust its three-dimensional spatial position; select the type and model of the tibial pad; select the simulated surgical conditions, which include femoral surgical parameters and tibial surgical parameters; the femoral surgical parameters include the distal femoral osteotomy amount, the posterior femoral condyle osteotomy amount, the internal-external rotation angle, the varus-valgus angle, and the femoral component flexion angle; the tibial surgical parameters include the tibial osteotomy amount, the internal-external rotation angle, the varus-valgus angle, and the posterior slope angle;
  • Simulated osteotomy: perform intelligent osteotomy according to the matching relationship between the 3D prosthesis and the bone, and observe the simulated matching effect;
  • if the simulated matching effect does not meet the surgical requirements, repeat the prosthesis selection and simulated osteotomy steps: re-select at least one of the prosthesis type, model, and simulated surgical conditions, then perform a simulated osteotomy and observe the simulated matching effect, until the simulated matching effect meets the surgical requirements.
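The iterate-until-satisfied logic above can be sketched as a simple loop (`evaluate` and `threshold` are hypothetical stand-ins for the assessment of the simulated matching effect against the surgical requirements):

```python
def simulate_matching(candidates, evaluate, threshold):
    """Iterate the prosthesis-selection / simulated-osteotomy loop:
    try each candidate plan (prosthesis type, model, simulated
    surgical conditions), score its simulated matching effect with
    `evaluate`, and stop at the first plan meeting `threshold`."""
    for plan in candidates:
        score = evaluate(plan)
        if score >= threshold:
            return plan, score
    return None, None  # no candidate met the surgical requirements
```
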
  • FIG. 8 shows the simulated matching interface; the state is before osteotomy, and the visualization effect is transparent (switchable).
  • Figure 9 shows the resulting image after osteotomy, with an opaque (switchable) visualization effect.
  • the image rotation module can be used to adjust the image angle and observe in multiple directions.
  • use the visual postoperative simulation module 501 to perform postoperative simulation, as shown in Figure 11, to observe the overall matching effect of the prosthesis with the femur and tibia after osteotomy and to observe the limb movement after total knee arthroplasty (not shown in the figure).
  • the data export module can also be used to export the preoperative planning data, which include the types and models of the prostheses (femoral, tibial, and tibial pad) used in the visual simulated matching process and the simulated surgical conditions, to form a preoperative planning report.
  • FIG. 12 is a schematic structural diagram of a device provided by an embodiment of the present disclosure.
  • the device includes a memory 10 , a processor 20 , an input device 30 and an output device 40 .
  • the number of processors 20 in the device may be one or more; one processor 20 is taken as an example in FIG. 12. The memory 10, processor 20, input device 30, and output device 40 in the device may be connected by a bus or other means; connection through the bus 50 is taken as an example in FIG. 12.
  • the memory 10 can be configured to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the deep learning-based preoperative planning method for total knee arthroplasty in the embodiments of the present disclosure.
  • the processor 20 executes various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 10, ie, implements the above-mentioned preoperative planning method.
  • the memory 10 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the device, and the like. Additionally, the memory 10 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, the memory 10 may further include memory located remotely from the processor 20, which may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 30 may be configured to receive input numerical or character information, and to generate key signal input related to user settings and function control of the device.
  • the output device 40 may include a display device such as a display screen.
  • Embodiments of the present disclosure also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a deep learning-based preoperative planning method for total knee arthroplasty,
  • the method includes:
  • the steps of medical image data processing based on deep learning: through the medical image data processing, three-dimensional images of four types of skeletal structures are obtained, and key axes, key anatomical sites, and key anatomical parameters are identified and marked;
  • the four types of skeletal structures include the femur, tibia, fibula, and patella;
  • key anatomical sites include the center points at different levels of the femoral medullary canal, the center points at different levels of the tibial medullary canal, the hip joint center point, the knee joint center point, the intercondylar spine center point, and the ankle joint center point;
  • key axes include femoral anatomical axis, femoral mechanical axis, tibial anatomical axis and tibial mechanical axis;
  • the key anatomical parameters include the tibiofemoral angle and the distal femoral angle; for a more specific method, see the first aspect;
  • the 3D prosthesis is simulated and matched with the 3D femur and the 3D tibia, and the simulated matching effect is observed in real time; when the simulated matching effect meets the surgical requirements, the simulated matching is deemed to be completed.
  • for more details, see the first aspect.
  • the storage medium containing computer-executable instructions provided by the present disclosure is not limited to the above-mentioned method operations, and can also perform relevant operations in any deep learning-based preoperative planning method for total knee arthroplasty of the present disclosure.
  • the present disclosure can be implemented by means of software plus necessary general-purpose hardware, and certainly can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solutions of the present disclosure, in essence or in the parts contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Surgery (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Robotics (AREA)
  • Computer Graphics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Prostheses (AREA)
  • Image Analysis (AREA)

Abstract

一种基于深度学习的全膝关节置换术的术前规划方法、系统和介质。方法包括:基于深度学习的医学图像数据处理的步骤,通过医学图像数据处理获得骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数;骨骼结构包括股骨、胫骨、腓骨和髌骨;关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;解剖参数包括胫股角和远端股骨角;可视化模拟匹配的步骤,将三维假体模型与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;当模拟匹配效果符合手术要求时,视为完成模拟匹配。该方法和系统基于深度学习实现了骨块的自动分割、全膝关节置换术中至少一种的关键轴线、关键解剖位点及关键解剖参数的识别测量。

Description

基于深度学习的全膝关节置换术的术前规划方法、系统和介质
本申请要求于2020年8月22日提交中国专利局,申请号为2020108529413,发明名称为“一种基于深度学习的全膝关节置换术的术前规划方法、系统、介质和设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开涉及医学技术领域,尤其涉及一种基于深度学习的全膝关节置换术的术前规划方法、系统和介质。
背景技术
膝关节是全身主要的承重关节,长期负重且运动量大,属于容易受伤的部位之一,再加上当前社会人口老龄化不断加剧,这些因素使得膝关节疾病的发生率较高。目前,国内外针对骨关节CT图像分割方法大部分需要在每一张CT图像中进行手动定位或手动分割,费时费力,且效率低。
全膝关节置换术(Total Knee Arthroplasty,TKA)是一种较成熟的治疗膝关节疾病的技术,能够有效恢复膝关节功能,极大地提高患者的生活质量。术前规划为医生提供技术支持,便于医生制定手术方案、观察下肢力线。如何更快、更精准地实现术前规划是具有现实意义的研究方向。
发明内容
本公开的目的是提供一种基于深度学习的全膝关节置换术的术前规划方法、系统和介质,以实现骨块的自动分割或全膝关节置换术中关键轴线、关键解剖位点及关键解剖参数的识别测量。
为了实现上述目的,本公开的第一方面提供了一种基于深度学习的全膝关节置换术的术前规划方法,所述方法基于患者下肢医学图像数据,所述方法包括:
基于深度学习的医学图像数据处理的步骤,通过所述医学图像数据处理获得骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数;所述骨骼结构包括股骨、胫骨、腓骨和髌骨;所述关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;所述关键解剖参数包括胫股角和远端股骨角;
可视化模拟匹配的步骤,将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;当模拟匹配效果符合手术要求时,视为完成模拟匹配。
可选地,所述医学图像数据处理的步骤包括骨骼三维影像重建的步骤;图像分割的步骤;识别标记关键轴线、关键解剖位点和关键解剖参数的步骤。
可选地,所述医学图像数据处理的步骤包括骨骼三维影像重建的步骤;图像分割的步骤;基于深度学习的识别标记关键轴线、关键解剖位点和关键解剖参数的步骤。
可选地,所述医学图像数据处理的步骤包括骨骼三维影像重建的步骤;基于深度学习的图像分割的步骤;基于深度学习的识别标记关键轴线、关键解剖位点和关键解剖参数的步骤。
可选地,所述图像分割基于深度学习进行,所述图像分割的步骤包括:
构建下肢医学图像数据库：获取下肢医学图像数据集，手动标注出股骨、胫骨、腓骨和髌骨区域；将所述数据集划分为训练集和测试集；将未标注前的医学图像数据转换成第一格式的图片并保存，将标注后的数据转换成第二格式的图片并保存；
建立分割神经网络模型；所述分割神经网络模型包括粗分割神经网络和精确分割神经网络；所述粗分割神经网络作为主干网络进行粗分割，所述精确分割神经网络基于所述粗分割进行精确分割；所述粗分割神经网络选自FCN、SegNet、Unet、3D-Unet、Mask-RCNN、空洞卷积、ENet、CRFasRNN、PSPNet、ParseNet、RefineNet、ReSeg、LSTM-CF、DeepMask、DeepLabV1、DeepLabV2、DeepLabV3中的至少一种；所述精确分割神经网络为EfficientDet、SimCLR、PointRend中的至少一种；
模型训练:利用训练集对分割神经网络模型进行训练,并利用测试集进行测试;和
利用训练好的分割神经网络模型进行分割。
可选地,所述粗分割神经网络采用Unet神经网络;
所述Unet神经网络包括n个上采样层和n个下采样层;
每个上采样层包括上采样操作层和卷积层;
每个下采样层包括卷积层和池化层。
可选地,n的取值可以为2-8,还可以为3-6,还可以为4-5。
可选地,每个上采样层包括1个上采样操作层和2个卷积层,其中的卷积层中的卷积核大小为3*3,上采样操作层中的卷积核大小为2*2,每个上采样层中的卷积核个数为512,256,256,128。
可选地,每个下采样层包括2个卷积层和1个池化层,其中的卷积层中的卷积核大小为3*3,池化层中的卷积核大小为2*2,每个卷积层中的卷积核的个数为128,256,256,512。
可选地,将所述数据集按照7:3的比例划分为训练集和测试集。
可选地,所述方法还包括执行以下至少一种操作:
最后一次上采样结束后设有一个dropout层,dropout率设置为0.5-0.7;所有的卷积层后面设有激活层,激活层使用的激活函数为relu函数。
可选地,所述训练按照如下方法进行:
粗分割:将训练集送入粗分割神经网络进行训练;训练过程中,数据标签的背景像素值设置为0,股骨为1,胫骨为2,腓骨为3,髌骨为4,训练的批尺寸batch_size为6,学习率设置为1e-4,优化器使用Adam优化器,使用的损失函数为DICE loss;可选地,根据训练过程中损失函数的变化,调整训练的批尺寸;
精确分割:送入精确分割神经网络进行精确分割;初始过程包括,先使用双线性插值上采样粗分割的预测结果,再在特征图中选定多个置信度为预设置信度的点,然后通过双线性插值Bilinear计算多个点的特征表示并且预测点所属的标签labels;重复所述初始过程,直到上采样到预测结果的置信度达到目标置信度。
可选地,选定置信度为0.5的点作为预设置信度的点。
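上述粗分割训练中使用的DICE loss可以用如下片段示意（这是在NumPy下的最小示意实现，并非本公开实际使用的训练代码，其中函数名为示例性假设）：

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """软DICE损失：1 - 2|X∩Y|/(|X|+|Y|)。pred为概率图，target为0/1标签图。"""
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# 示例：预测与标签完全一致时损失趋近于0，完全不相交时趋近于1
mask = np.array([[0, 1], [1, 1]])
print(dice_loss(mask, mask))
```

当预测与标签完全一致时损失趋近于0，完全不相交时趋近于1，因此该损失常被用于骨骼等前景占比较小、类别不均衡的分割任务。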
可选地,所述下肢医学图像数据为CT扫描数据。
可选地,识别标记关键轴线、关键解剖位点和关键解剖参数基于深度学习进行,该步骤包括:
识别关键解剖位点;利用MTCNN、locnet、Pyramid Residual Module、Densenet、hourglass、resnet、SegNet、Unet、R-CNN、Fast R-CNN、Faster R-CNN、R-FCN、SSD中的至少一种识别神经网络模型识别关键解剖位点;
利用关键解剖位点获得关键轴线;和
测量关键解剖参数。
可选地,所述识别关键解剖位点的步骤包括:
构建数据库:获取下肢医学图像数据集,手动标定关键解剖位点;将所述数据集划分为训练集和测试集,可以按照7:3的比例划分。
建立识别神经网络模型;
模型训练:利用训练集对识别神经网络模型进行训练,并利用测试集进行测试;
利用训练好的识别神经网络模型进行关键解剖位点的识别。
可选地,所述利用关键解剖位点获得关键轴线包括:
对于股骨解剖轴,通过拟合股骨髓腔的不同层面上的中心点而得到;
对于胫骨解剖轴,通过拟合胫骨髓腔的不同层面上的中心点而得到;
其中,所述拟合的方法为最小二乘法、梯度下降、高斯牛顿、列-马算法中的任一种;
对于除所述股骨解剖轴和胫骨解剖轴之外的其他关键轴线,利用确定的两个端点而得到。
可选地,所述三维假体包括三维股骨假体和三维胫骨假体;还包括胫骨垫;和
所述模拟匹配包括:
假体植入:将三维股骨假体植入股骨,将三维胫骨假体植入胫骨;还包括将胫骨垫植入假体间隙;
假体选择:选择三维股骨假体和三维胫骨假体,选择模拟手术条件;
模拟截骨:根据三维假体与骨骼的匹配关系智能截骨,观察三维假体与骨骼的模拟匹配效果;
若是模拟匹配效果不符合手术需求,则重复所述假体选择和所述模拟截骨的步骤,直至模拟匹配效果符合手术要求。
可选地,所述假体选择的步骤包括如下三维股骨假体选择的步骤、三维胫骨假体选择的步骤、模拟手术条件选择的步骤中的至少一种:
三维股骨假体选择的步骤:选择三维股骨假体包括选择股骨假体类型、股骨假体型号、股骨假体的三维空间位置中的至少一种;
三维胫骨假体选择的步骤:选择三维胫骨假体包括选择胫骨假体类型、胫骨假体型号、三维空间位置中的至少一种;
模拟手术条件选择的步骤:选择模拟手术条件包括选择股骨手术参数、选择胫骨手术参数中的至少一种;所述股骨手术参数包括股骨远端截骨量、股骨后髁截骨量、内外旋角、内外翻角和股骨假体屈曲角;所述胫骨手术参数包括胫骨截骨量、内外旋角、内外翻角和后倾角。
可选地,对至少一种骨骼结构进行显示,并执行以下操作方式中至少一种:
透明度的切换、图像缩放、图像旋转、图像移动;
所述透明度包括透明和不透明两种。
可选地,在如下一个或多个状态下观察模拟匹配效果:
(a)截骨状态或非截骨状态;
(b)骨骼透明状态或不透明状态;
(c)腓骨显示或不显示状态。
可选地，所述关键解剖位点还包括股骨内髁凹点、股骨外髁最高点、股骨内外后髁最低点、胫骨平台内侧低点和外侧高点、后交叉韧带中点和胫骨结节内侧缘点、股骨远端最低点中的至少一种；所述关键轴线还包括通髁线、后髁连线、胫骨膝关节线、股骨矢状轴、股骨膝关节线中的至少一种；所述关键解剖参数还包括股骨后髁角。
可选地,在透明度为不透明的状态下标记出关键轴线;
可选地,通过所述医学图像数据处理获得骨骼结构的三维影像和二维影像;所述二维影像包括横断面影像、矢状面影像和冠状面影像;横断面影像、矢状面影像和冠状面影像三轴联动。
可选地,在标记关键轴线后,观察关键轴线、关键解剖位点中的至少一个是否对位,并将不对位的关键轴线、关键解剖位点中的至少一个进行手动标记;独立显示出股骨、胫骨中至少一种,调整股骨、胫骨中至少一种的观察角度,然后再进行关键轴线、关键解剖位点中的至少一个的手动标记。
可选地,所述方法还包括:
可视化术后模拟的步骤,以模拟全膝关节置换术的术后肢体运动情况;
将符合手术需求的模拟匹配数据导出,形成术前规划报告的步骤,以便于医生进行术前部署。
本公开的第二方面提供了一种基于深度学习的全膝关节置换的术前规划系统,所述系统包括:
医学图像数据处理模块,被配置为通过医学图像数据处理获得骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数;所述骨骼结构包括股骨、胫骨、腓骨和髌骨;所述关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点、踝关节中心点;所述关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;所述关键解剖参数包括胫股角和远端股骨角;
模拟匹配模块,被配置为将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;和
显示模块:被配置为显示骨骼结构的三维影像、关键轴线、关键解剖位点、关键解剖参数和模拟匹配效果。
可选地,所述医学图像数据处理模块包括:
三维重建单元,被配置为获得骨骼结构的三维影像;
图像分割单元,被配置为分割出股骨、胫骨、腓骨和髌骨;
识别标记单元,被配置为识别标记出关键轴线、关键解剖位点和关键解剖参数。
可选地,所述系统还包括:
图像组合模块,被配置为将骨骼结构任意组合;
图像透明度切换模块,被配置为切换骨骼结构的透明度;
图像缩放模块,被配置为缩放骨骼结构的三维影像、二维影像中的至少一种;
图像旋转模块,被配置为将图像按照任意轴进行旋转;
图像移动模块,被配置为将图像进行移动。
可选地,所述系统还包括:
数据导入模块;
术后模拟模块;数据导出模块。
本公开的第三方面提供了一种设备,包括:
一个或多个处理器;
存储装置,被配置为存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现第一方面任一项所述的基于深度学习的全膝关节置换术的术前规划方法。
本公开的第四方面提供了一种计算机可读存储介质,其上存储有计算机程序,所述程序被处理器执行时实现第一方面任一项所述的基于深度学习的全膝关节置换术的术前规划方法。
本公开的上述技术方案具有如下优点:
本公开提供的基于深度学习的全膝关节置换术的术前规划方法和系统基于深度学习实现了股骨、胫骨、腓骨和髌骨的自动分割。本公开提高了分割效率以及准确率。本公开提供的方法和系统基于深度学习实现了关键轴线和关键解剖参数自动识别和测量。
本公开提供的基于深度学习的全膝关节置换术的术前规划系统智能高效,医生学习时间短,无需经过长时间、大体量手术的培训即可掌握;而且,成本较低,无需复杂设备。
利用本公开提供的基于深度学习的全膝关节置换术的术前规划方法和系统可以在术前确定植入假体的尺寸和位置,并且能虚拟测试假体是否达到性能要求,以便最优化关节面重建和假体位置的确定;为医生提供技术支持,使外科手术更准确、更安全;促进外科手术向智能化、精准化、微创化方向发展。
附图说明
为了更清楚地说明本公开具体实施方式中的技术方案,下面将对具体实施方式描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施方式,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示意性地示出了本公开提供的基于深度学习的全膝关节置换术的术前规划方法的流程图;
图2示意性地示出了本公开提供的基于深度学习的全膝关节置换术的术前规划系统的框图;
图3是分割后四类骨骼结构组合显示的三维影像,a和b分别为不同角度下的三维影像;
图4是只显示股骨时的股骨三维影像,a和b分别为不同角度下的三维影像;
图5是只显示胫骨时的胫骨三维影像,a和b分别为不同角度下的三维影像;
图6是胫骨平台放大后的三维影像;
图7是标记关键轴线后的结果图;
图8是截骨前的模拟匹配的界面(显像效果为透明);
图9是截骨后的模拟匹配的界面(显像效果为不透明);
图10是在不同角度下的图像,a为股骨,b为胫骨;
图11是术后模拟的结果图;
图12示意性地示出了本公开提供的设备的结构图。
图中:101:医学图像数据处理模块;201:模拟匹配模块;301:显示模块;401:数据导入模块;501:可视化术后模拟模块。
具体实施方式
为使本公开的目的、技术方案和优点更加清楚，下面将结合本公开实施例，对本公开的技术方案进行清楚、完整地描述。显然，所描述的实施例是本公开的一部分实施例，而不是全部的实施例。基于本公开中的实施例，本领域普通技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例，都属于本公开保护的范围。
本公开在第一方面提供了一种基于深度学习的全膝关节置换术的术前规划方法,所述方法基于患者下肢医学图像数据,参考图1,本公开提供的方法包括如下步骤:
S1、基于深度学习的医学图像数据处理的步骤,通过所述医学图像数据处理获得骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数;所述骨骼结构包括股骨、胫骨、腓骨和髌骨;所述关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;所述关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点和踝关节中心点;所述关键解剖参数包括胫股角和远端股骨角;
S2、可视化模拟匹配的步骤,将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;当模拟匹配效果符合手术要求时,视为完成模拟匹配。
本公开提供的基于深度学习的全膝关节置换术的术前规划方法基于深度学习实现了股骨、胫骨、腓骨和髌骨的自动分割,提高了分割效率以及准确率。并且,本公开提供的方法基于深度学习实现了关键轴线和关键解剖参数自动识别和测量。
本公开提供的方法智能高效,医生学习时间短,无需经过长时间、大体量手术的培训即可掌握;而且,成本较低,无需复杂设备。
利用本公开提供的方法可以在术前确定植入假体的尺寸和位置,并且能虚拟测试假体是否达到性能要求,以便最优化关节面重建和假体位置的确定;为医生提供技术支持,使外科手术更准确、更安全;促进外科手术向智能化、精准化、微创化方向发展。
关于S1:
继续参考图1,所述医学图像数据处理的步骤包括骨骼三维影像重建的步骤;图像分割的步骤;识别标记出关键轴线、关键解剖位点和关键解剖参数的步骤。本公开对医学图像数据处理步骤所包括的三个步骤没有顺序上的限定。在获得患者的医学图像数据后,可以先进行三维影像重建,再进行分割、识别标记,也可以先进行分割,再进行三维影像重建、识别标记,本公开在此对可以实现的顺序不一一列举说明。
通过三维影像重建获得股骨、胫骨、腓骨和髌骨这四类骨骼的三维影像。无需说明的是,若是在分割之前进行三维影像重建,则获得的三维影像中的骨骼结构是存在连结的。通过图像分割至少能够获得股骨、胫骨、腓骨和髌骨这四类骨骼结构,分割出的这四类骨骼结构无连结。通过识别标记的步骤至少识别标记出股骨和胫骨上的股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴,至少获得胫股角和远端股骨角这些关键解剖参数。
本公开在图像分割的步骤和/或识别标记的步骤可通过深度学习技术实现AI图像分割和/或AI识别标记关键轴线、关键解剖位点和关键解剖参数。
关于图像分割:
在一些实施方式中,所述图像分割基于深度学习进行,所述图像分割的步骤包括:
构建下肢医学图像数据库:获取下肢医学图像数据集,手动标注出股骨、胫骨、腓骨和髌骨区域;将所述数据集划分为训练集和测试集,可以按照7:3的比例进行划分;将未标注前的医学图像数据(如二维横断面影像dicom格式的数据)转换成第一格式(如jpg格式)的图片并保存,将标注后的数据转换成第二格式(如png格式)的图片并保存;第一格式和第二格式不相同;
建立分割神经网络模型;
模型训练:利用训练集对分割神经网络模型进行训练,并利用测试集进行测试;和
利用训练好的分割神经网络模型进行分割。
关于分割神经网络模型:
在一些实施方式中，分割神经网络模型包括级联的粗分割神经网络和精确分割神经网络；所述粗分割神经网络作为主干网络进行粗分割，所述精确分割神经网络基于所述粗分割进行精确分割；所述粗分割神经网络选自FCN、SegNet、Unet、3D-Unet、Mask-RCNN、空洞卷积、ENet、CRFasRNN、PSPNet、ParseNet、RefineNet、ReSeg、LSTM-CF、DeepMask中的至少一种；所述精确分割神经网络为EfficientDet、SimCLR、PointRend中的至少一种。
以所述分割神经网络模型为Unet+PointRend为例,利用Unet神经网络进行粗分割,利用PointRend神经网络进行精确分割。所述Unet神经网络包括n个上采样层和n个下采样层;每个上采样层包括上采样操作层和卷积层;每个下采样层包括卷积层和池化层。n的取值可以为2-8,还可以为3-6,还可以为4-5。每个上采样层包括1个上采样操作层和2个卷积层,其中的卷积层中的卷积核大小为3*3,上采样操作层中的卷积核大小为2*2,每个上采样层中的卷积核个数为512,256,256,128。每个下采样层包括2个卷积层和1个池化层,其中的卷积层中的卷积核大小为3*3,池化层中的卷积核大小为2*2,每个卷积层中的卷积核的个数为128,256,256,512。
在一些实施方式中,所述方法还包括执行以下至少一种操作:
最后一次上采样结束后设有一个dropout层,dropout率设置为0.5-0.7;
所有的卷积层后面设有激活层,激活层使用的激活函数为relu函数。
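上述Unet的n个下采样层与n个上采样层（以n=4为例）使特征图空间尺寸先逐级减半、再对称逐级加倍，可用如下片段示意（仅为尺寸推演的示例，不涉及实际卷积运算，函数名为示例性假设）：

```python
def unet_spatial_sizes(input_size, n=4):
    """示意Unet编码-解码路径上特征图空间尺寸的变化：
    每个下采样层经2*2池化使尺寸减半，每个上采样层经2*2上采样使尺寸加倍。"""
    sizes = [input_size]
    for _ in range(n):               # 下采样路径
        sizes.append(sizes[-1] // 2)
    for _ in range(n):               # 上采样路径（与下采样对称）
        sizes.append(sizes[-1] * 2)
    return sizes

print(unet_spatial_sizes(512))  # [512, 256, 128, 64, 32, 64, 128, 256, 512]
```

由此可见编码端与解码端的尺寸一一对应，这也是Unet能够在上采样时逐级融合同分辨率特征的原因。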
关于模型训练:
所述训练按照如下方法进行:
粗分割:训练过程中,将训练集全部送入Unet神经网络进行训练;训练过程中,数据标签的背景像素值设置为0,股骨为1,胫骨为2,腓骨为3,髌骨为4,训练的批尺寸batch_size为6,学习率设置为1e-4,优化器使用Adam优化器,使用的损失函数为DICE loss,可以根据训练过程中损失函数的变化,调整训练的批尺寸;
精确分割:完成粗分割后,送入PointRend神经网络进行精确分割;初始过程包括,先使用双线性插值上采样粗分割的预测结果,再在特征图中选定多个置信度为预设置信度的点,然后通过双线性插值Bilinear计算多个点的特征表示并且预测点所属的标签labels;重复所述初始过程,直到上采样到预测结果的置信度达到目标置信度。
在一些实施方式中,选定置信度为0.5的点作为预设置信度的点。
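上述精确分割的初始过程（先用双线性插值上采样粗分割预测结果，再选取置信度接近预设置信度的点做精细预测）可用如下NumPy片段示意（仅为原理性示例，并非PointRend的实际实现，函数与参数均为示例性假设）：

```python
import numpy as np

def bilinear_upsample(prob, scale=2):
    """对二维概率图做双线性插值上采样（对齐端点的简化写法）。"""
    h, w = prob.shape
    H, W = h * scale, w * scale
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = prob[y0][:, x0] * (1 - wx) + prob[y0][:, x1] * wx
    bot = prob[y1][:, x0] * (1 - wx) + prob[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def select_uncertain_points(prob, thresh=0.5, k=8):
    """选取置信度最接近预设置信度（如0.5）的k个像素坐标，作为需精细预测的点。"""
    dist = np.abs(prob.ravel() - thresh)
    idx = np.argsort(dist)[:k]
    return np.stack(np.unravel_index(idx, prob.shape), axis=1)
```

置信度接近0.5的点多位于骨骼边缘，对这些点单独预测标签即可在不整体提高分辨率的情况下细化分割边界。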
关于基于深度学习的识别标记:
在一些实施方式中,所述基于深度学习的识别标记的步骤包括:
识别关键解剖位点;
利用关键解剖位点获得关键轴线;
测量关键解剖参数。
关于识别关键解剖位点:
本公开需要识别的关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点和踝关节中心点,在一些实施方式中,还包括股骨内髁凹点、股骨外髁最高点、股骨内外后髁最低点、胫骨平台内侧低点和外侧高点、后交叉韧带中点和胫骨结节内侧缘点、股骨远端最低点等。
识别关键解剖位点的步骤包括:
构建数据库:获取下肢医学图像数据集,手动标定关键解剖位点;将所述数据集划分为训练集和测试集,可以按照7:3的比例划分。
建立关键点识别神经网络模型:所述识别神经网络模型为MTCNN、locnet、Pyramid Residual Module、Densenet、hourglass、resnet、SegNet、Unet、R-CNN、Fast R-CNN、Faster R-CNN、R-FCN、SSD中的至少一种。
以hourglass为例,其网络细节包括:
首先Conv层和Max Pooling层用于将特征的分辨率进行缩放;
每一个Max Pooling(降采样)处,网络进行分叉,上下两路在不同尺度空间进行卷积操作提取特征;
得到最低分辨率特征后,网络开始进行upsampling,并逐渐结合不同尺度的特征信息;对较低分辨率可以采用最近邻上采样方式,将两个不同的特征集进行逐元素相加;
整个hourglass是对称的，获取低分辨率特征过程中每有一个网络层，则在上采样的过程中相应地就会有一个对应网络层；
得到hourglass网络模块输出后,再采用两个连续的1×1Conv层进行处理,得到最终的网络输出;输出为heatmaps的集合,每一个heatmap表征了关键点在每个像素点存在的概率。
模型训练:利用训练集对识别神经网络模型进行训练,并利用测试集进行测试。
以hourglass为例,在进行训练时,输入像素值为0-255的正投影图像和label.txt,可以通过每张图片的名称找到互相对应的点的坐标;若直接用目标点的坐标进行学习,神经网络需要自行将空间位置转换为坐标,是一种比较难学习的训练方式,所以将这些点生成高斯图,用heatmap去监督,即网络的输出是一个与输入大小相同尺寸的特征图,在检测点的位置为1,其他位置为0,多个点的检测就输出多个通道的特征图;网络使用Adam优化,学习率为1e-5,批尺寸batch_size为4,损失函数使用L2正则化,可以根据训练过程中损失函数的变化,调整训练的批尺寸,得到关键点位的坐标值。
利用训练好的识别神经网络模型进行关键解剖位点的识别。
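上述"将关键点生成高斯图、用heatmap监督"以及从输出热图中取回关键点坐标的做法，可用如下片段示意（仅为原理性示例，并非本公开的实际训练代码，函数名为示例性假设）：

```python
import numpy as np

def make_heatmap(shape, center, sigma=2.0):
    """将关键点坐标转成高斯热图：峰值1位于关键点处，其余位置按高斯衰减。"""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def decode_heatmap(hm):
    """从热图中取响应最大的像素作为关键点坐标。"""
    return tuple(int(i) for i in np.unravel_index(np.argmax(hm), hm.shape))

hm = make_heatmap((64, 64), (20, 37))
print(decode_heatmap(hm))  # (20, 37)
```

每个关键点对应一个通道的热图，推理时对各通道分别取峰值位置，即得到各关键解剖位点的坐标。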
关于利用关键解剖位点获得关键轴线:
对于股骨解剖轴,可以通过拟合股骨髓腔的不同层面上的中心点而得到。同样地,对于胫骨解剖轴,可以通过拟合胫骨髓腔的不同层面上的中心点而得到。拟合的方法可以为最小二乘法、梯度下降、高斯牛顿、列-马算法中的任一种。
对于其它种类的关键轴线，可以利用确定的两个端点而得到。如，股骨机械轴的两个端点（髋关节中心点和膝关节中心点）已被识别出来，可以通过这两点确定股骨机械轴线。
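上述由髓腔各层面中心点拟合解剖轴、由两个端点确定机械轴的计算，可用如下片段示意（此处以SVD实现最小二乘直线拟合，仅为示例性写法，并非本公开限定的拟合实现，函数名为假设）：

```python
import numpy as np

def fit_axis_least_squares(centers):
    """对髓腔各层面中心点做最小二乘三维直线拟合（SVD主方向），
    返回直线上一点（质心）与单位方向向量。"""
    pts = np.asarray(centers, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction

def axis_from_endpoints(p0, p1):
    """由两个已识别端点（如髋关节中心与膝关节中心）确定机械轴：返回起点与单位方向。"""
    d = np.asarray(p1, float) - np.asarray(p0, float)
    return np.asarray(p0, float), d / np.linalg.norm(d)

# 示例：对共线的中心点拟合，方向应与真实轴向平行
pts = [(0.0, 0.0, z) for z in range(10)]
_, d = fit_axis_least_squares(pts)
```

梯度下降、高斯牛顿、列-马算法等其它拟合方法同样适用，此处仅以SVD形式的最小二乘为例。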
测量关键解剖参数:
在该步骤可以自动测量的关键解剖参数包括胫股角、远端股骨角,还可以自动测量出股骨后髁角。
本公开通过所述医学图像数据处理不仅可以获得骨骼结构的三维影像,还可以获得二维影像;所述二维影像包括横断面影像、矢状面影像和冠状面影像,并且横断面影像、矢状面影像和冠状面影像可以三轴联动。
对于本公开来说，通过医学图像数据处理获得的骨骼结构的三维影像可以进行任意组合，从而实现骨骼结构灵活多样的显示方式。显示的情形包括如下任一种：只显示股骨；只显示胫骨；只显示腓骨；只显示髌骨；同时显示股骨和胫骨；同时显示股骨和腓骨；同时显示股骨和髌骨；同时显示胫骨和腓骨；同时显示胫骨和髌骨；同时显示腓骨和髌骨；同时显示股骨、胫骨和腓骨；同时显示股骨、胫骨和髌骨；同时显示股骨、腓骨和髌骨；同时显示胫骨、腓骨和髌骨；同时显示股骨、胫骨、腓骨和髌骨。
对于本公开来说,通过医学图像数据处理获得的骨骼结构的三维影像可以进行透明度的变换,使得影像表现出多样的显像效果。具体来说,透明度可以在透明和不透明之间进行切换。例如,只显示股骨时,股骨的显像效果可以选择透明,也可以选择不透明。例如,只显示胫骨时,胫骨的显像效果可以选择透明,也可以选择不透明。例如,同时显示股骨和胫骨时,两类骨骼的显像效果可以选择透明,也可以选择不透明。例如,同时显示股骨和腓骨时,两类骨骼的显像效果可以选择透明,也可以选择不透明。例如,同时显示股骨、胫骨和腓骨时,三类骨骼的显像效果可以选择透明,也可以选择不透明。例如,同时显示股骨、胫骨、腓骨和髌骨时,骨骼的显像效果可以选择透明,也可以选择不透明。
对于本公开来说，通过医学图像数据处理获得的骨骼结构的三维影像可以进行图像缩放。如，只显示股骨时，可以进行股骨图像的缩放（缩小或放大，以下同）。如，只显示胫骨时，可以进行胫骨图像的缩放。如，同时显示股骨和胫骨时，可以进行股骨和胫骨图像的缩放。如，同时显示股骨、胫骨和腓骨时，可以进行这三类骨骼图像的缩放。如，同时显示股骨、胫骨、腓骨和髌骨时，可以进行这四类骨骼图像的缩放。在一些实施方式中，二维影像（包括横断面影像、矢状面影像和冠状面影像）也可以进行图像的缩放，如，横断面影像、矢状面影像和冠状面影像同时放大或缩小。
对于本公开来说，通过医学图像数据处理获得的骨骼结构的三维影像可以按照任意轴进行旋转，还可以进行图像移动。如，只显示股骨时，可以将股骨按照任意轴进行旋转。如，只显示胫骨时，可以将胫骨按照任意轴进行旋转。如，同时显示股骨和胫骨时，可以将股骨和胫骨按照任意轴进行旋转。如，同时显示股骨、胫骨和腓骨时，可以将这三类骨骼按照任意轴进行旋转。如，同时显示股骨、胫骨、腓骨和髌骨时，可以将这四类骨骼结构按照任意轴进行旋转。
总的来说,灵活多样的显示方式更加直观地显示了骨骼的立体结构,使得医生(或其它医护人员)可以多角度、多层次地观察骨骼结构的影像。透明的含义为图像透明度(transparency)为0.3-0.75,不透明的含义为图像透明度为0.8-1。
本公开通过识别标记步骤实现关键轴线、关键解剖位点、关键解剖参数的识别标记。关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴、胫骨机械轴。在一些实施方式中,关键轴线还包括通髁线、后髁连线、胫骨膝关节线、股骨矢状轴、股骨膝关节线中的至少一种。关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点、踝关节中心点,还可以包括股骨内髁凹点、股骨外髁最高点、股骨内外后髁最低点、胫骨平台内侧低点和外侧高点、后交叉韧带中点和胫骨结节内侧缘点、股骨远端最低点。关键解剖参数包括胫股角、远端股骨角。在一些实施方式中,所述关键解剖参数还包括股骨后髁角。
在一些实施方式中,在透明度为不透明的状态下标记出关键轴线。
在一些实施方式中,在标记关键轴线后,观察关键轴线、关键解剖位点中的至少一个是否对位,并将不对位的关键轴线、关键解剖位点中的至少一个进行手动标记;独立显示出股骨、胫骨中至少一种,通过旋转调整股骨、胫骨中至少一种的观察角度,然后再进行关键轴线、关键解剖位点中的至少一个的手动标记。
本公开提供的方法中的医学图像数据为CT扫描数据,该数据为dicom格式的数据。基于全膝关节置换术,CT的扫描范围为下肢全长,即:髋关节至踝关节。显然地,本公开中的医学图像数据为下肢全长dicom数据,下肢全长的范围为髋关节至踝关节。
本公开中提及的术语均为骨科常规术语,各个术语解释如下:
股骨解剖轴:股骨骨干中心线。
股骨机械轴:一端点位于髋关节中心,另一端点位于股骨的膝关节中心点(股骨髁间窝顶点)。
胫骨解剖轴:胫骨骨干中心线。
胫骨机械轴:一端点位于胫骨膝关节中心(髁间棘的中心),另一端点位于胫骨踝关节中心(内外踝外侧骨皮质连线的中点)。
通髁线:股骨内髁凹与外髁最高点之间的连线。
后髁连线:股骨内外后髁最低点之间的连线。
股骨膝关节线:股骨远端最低点的连线。
胫骨膝关节线:胫骨平台内侧低点和外侧高点的连线。
股骨矢状轴:后交叉韧带止点中心与胫骨结节内缘的连线。
胫股角(又称mTFA):股骨机械轴和胫骨机械轴形成的夹角。
远端股骨角:股骨机械轴与股骨解剖轴之间的夹角。
股骨后髁角(又称PCA):股骨通髁线与后髁连线在横断面的投影线之间的夹角。
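上述角度类关键解剖参数（如胫股角mTFA为股骨机械轴与胫骨机械轴形成的夹角）在得到各轴线的方向向量后即可计算，以下为一个示意性片段（仅为示例性写法，函数名为假设）：

```python
import numpy as np

def angle_between_axes(d1, d2):
    """计算两条轴线方向向量之间的夹角（单位：度），
    如股骨机械轴与胫骨机械轴之间的胫股角（mTFA）。"""
    d1 = np.asarray(d1, float); d2 = np.asarray(d2, float)
    cosang = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

print(angle_between_axes((0, 1, 0), (0.1, 1, 0)))  # 约5.7度
```

远端股骨角（股骨机械轴与股骨解剖轴的夹角）可用同一函数计算；股骨后髁角则需先将通髁线与后髁连线投影到横断面，再对投影方向求夹角。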
关于S2:
在一些实施方式中,所述三维假体包括三维股骨假体和三维胫骨假体;和
所述模拟匹配包括:
假体植入:将三维股骨假体植入股骨(指的是股骨三维影像),将三维胫骨假体植入胫骨(指的是胫骨三维影像);可以将可视化三维假体用颜色与骨骼结构区分开来;
假体选择:选择三维股骨假体和三维胫骨假体,选择模拟手术条件;
模拟截骨:根据三维假体与骨骼的匹配关系智能截骨,观察模拟匹配效果;
若是模拟匹配效果不符合手术需求,则重复假体选择和模拟截骨的步骤,直至模拟匹配效果符合手术要求。
假体选择的步骤包括如下三维股骨假体选择的步骤、三维胫骨假体选择的步骤、模拟手术条件选择的步骤中的至少一种:
三维股骨假体选择的步骤:选择三维股骨假体包括选择股骨假体类型、股骨假体型号(型号代表大小,以下同)、股骨假体的三维空间位置中的至少一种;
三维胫骨假体选择的步骤:选择三维胫骨假体包括选择胫骨假体类型、胫骨假体型号、胫骨假体的三维空间位置中的至少一种;还可以选择三维胫骨垫类型、型号中的至少一种。存储的股骨假体类型以及其型号、胫骨假体类型以及其型号、胫骨垫类型以及其型号中所提及的类型和型号为市售产品(目前市场上已有的全膝关节置换用假体)的商品类型和型号。如,股骨假体类型有ATTUNE-PS、ATTUNE-CR、SIGMA-PS150等。如,ATTUNE-PS的型号有1、2、3、3N、4、4N、5、5N、6、6N。如,SIGMA-PS150的型号有1、1.5、2、2.5、3、4、4N、5、6。如,胫骨假体类型有ATTUNE-FB、ATTUNE-RP、SIGMA-MBT等。如,ATTUNE-FB的型号有1、2、3、4、5、6、7、8、9、10。如,SIGMA-MBT的型号有1、1.5、2、2.5、3、4、5、6、7。本公开在此不一一举例说明。
模拟手术条件选择的步骤:选择模拟手术条件包括选择股骨手术参数、选择胫骨手术参数中的至少一种;所述股骨手术参数包括股骨远端截骨量、股骨后髁截骨量、内外旋角、内外翻角和股骨假体屈曲角;所述胫骨手术参数包括胫骨截骨量、内外旋角、内外翻角和后倾角。
在一些实施方式中,在如下一个或多个状态下观察模拟匹配效果:
(a)截骨状态或非截骨状态;
(b)骨骼透明状态或不透明状态;
(c)腓骨显示或不显示状态。
关于S3:
在一些实施方式中,所述方法还包括S3:可视化术后模拟的步骤,用于模拟全膝关节置换术的术后肢体运动情况。
在一些实施方式中,所述方法(图1未示出)还包括将符合手术需求的模拟匹配数据导出,形成术前规划报告的步骤,便于医生进行术前部署。
本公开在第二方面提供了一种基于深度学习的全膝关节置换术的术前规划系统,参考图2,系统包括:
医学图像数据处理模块101,被配置为通过医学图像数据处理获得骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数;所述骨骼结构包括股骨、胫骨、腓骨和髌骨;所述关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点和踝关节中心点;所述关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;所述关键解剖参数包括胫股角和远端股骨角;
模拟匹配模块201,被配置为将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;和
显示模块301:被配置为显示骨骼结构的三维影像、关键轴线、关键解剖位点、关键解剖参数和模拟匹配效果。
本公开提供的基于深度学习的全膝关节置换术的术前规划系统基于深度学习实现了股骨、胫骨、腓骨和髌骨的自动分割,提高了分割效率以及准确率。并且,本公开提供的系统基于深度学习实现了关键轴线和关键解剖参数自动识别和测量。
本公开提供的系统智能高效,医生学习时间短,无需经过长时间、大体量手术的培训即可掌握;而且,成本较低,无需复杂设备。
利用本公开提供的系统可以在术前确定植入假体的尺寸和位置,并且能虚拟测试假体是否达到性能要求,以便最优化关节面重建和假体位置的确定;为医生提供技术支持,使外科手术更准确、更安全;促进外科手术向智能化、精准化、微创化方向发展。
在一些实施方式中,所述医学图像数据处理模块101包括:
三维重建单元,被配置为获得骨骼结构的三维影像;
图像分割单元,被配置为分割出股骨、胫骨、腓骨和髌骨;
识别标记单元,被配置为识别标记出关键轴线、关键解剖位点和关键解剖参数。
在一些实施方式中，基于深度学习的全膝关节置换术的术前规划系统还包括数据导入模块401，被配置为导入医学图像数据。
在一些实施方式中,基于深度学习的全膝关节置换术的术前规划系统还包括可视化术后模拟模块501,被配置为模拟全膝关节置换术的术后肢体运动情况。
在一些实施方式中，所述基于深度学习的全膝关节置换术的术前规划系统还包括图像组合模块，被配置为将骨骼结构任意组合。在一些实施方式中，基于深度学习的全膝关节置换术的术前规划系统还包括图像透明度切换模块，被配置为切换骨骼结构的透明度。在一些实施方式中，所述系统还包括图像缩放模块，被配置为缩放骨骼结构的三维影像、二维影像中的至少一种。在一些实施方式中，基于深度学习的全膝关节置换术的术前规划系统还包括图像旋转模块，被配置为将图像按照任意轴进行旋转。在一些实施方式中，基于深度学习的全膝关节置换术的术前规划系统还包括图像移动模块，被配置为将图像进行移动。
在一些实施方式中,基于深度学习的全膝关节置换术的术前规划系统还包括数据导出模块,被配置为将符合手术需求的模拟匹配数据导出,形成术前规划报告。
除此之外，本系统可以实现的更多的功能或者更为具体的功能请见第一方面内容。
本公开在第三方面提供了一种设备,包括:
一个或多个处理器;
存储装置,被配置为存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现本公开在第一方面提供的基于深度学习的全膝关节置换术的术前规划方法。
本公开在第四方面提供了一种计算机可读存储介质,其上存储有计算机程序,
所述计算机程序被处理器执行时实现本公开在第一方面提供的基于深度学习的全膝关节置换术的术前规划方法。
以下再结合附图3至图11进行更为具体的说明:
导入数据：利用数据导入模块401将CT扫描获得的下肢全长dicom数据导入基于深度学习的全膝关节置换术的术前规划系统中。
基于深度学习的医学图像数据处理：利用医学图像数据处理模块101进行该步骤，通过医学图像数据处理获得骨骼结构的三维影像和二维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数；所述骨骼结构包括股骨、胫骨、腓骨和髌骨；关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点、踝关节中心点，还包括股骨内髁凹点、股骨外髁最高点、股骨内外后髁最低点、胫骨平台内侧低点和外侧高点、后交叉韧带中点和胫骨结节内侧缘点、股骨远端最低点；所述关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴、胫骨机械轴，还包括通髁线、后髁连线、胫骨膝关节线、股骨矢状轴、股骨膝关节线中的任一种或多种；所述关键解剖参数包括胫股角和远端股骨角，还包括股骨后髁角。
可选地,该步骤包括:
骨骼三维影像重建的步骤
利用三维重建单元,根据下肢全长dicom数据进行三维影像重建,获得下肢骨骼三维影像,可以通过显示模块301显示出来。三维影像重建可以利用现有方法实现,因此,三维重建单元可以为现有的能够实现三维影像重建的单元。
基于深度学习的图像分割的步骤
按照如下方法实现股骨、胫骨、腓骨和髌骨这四类骨骼结构的分割:
构建下肢医学图像数据库:获取下肢CT图像数据集,手动标注出股骨、胫骨、腓骨和髌骨区域;将数据集按照7:3的比例划分为训练集和测试集;将未标注前的二维横断面影像dicom数据转换成jpg格式的图片并保存,将标注后的数据转换成png格式的图片并保存。此处以二维横断面数据进行说明,还可以使用二维矢状面和二维冠状面数据。
建立分割神经网络模型，分割神经网络模型为Unet+PointRend，利用Unet神经网络进行粗分割，利用PointRend神经网络进行精确分割；所述Unet神经网络包括4个上采样层和4个下采样层；每个上采样层包括1个上采样操作层和2个卷积层，其中的卷积层中的卷积核大小为3*3，上采样操作层中的卷积核大小为2*2，每个上采样层中的卷积核个数为512,256,256,128；每个下采样层包括2个卷积层和1个池化层，其中的卷积层中的卷积核大小为3*3，池化层中的卷积核大小为2*2，每个卷积层中的卷积核的个数为128,256,256,512；最后一次上采样结束后设有一个dropout层，dropout率设置为0.5-0.7；所有的卷积层后面设有激活层，激活层使用的激活函数为relu函数。
模型训练,包括:
粗分割训练:将训练集全部送入Unet神经网络进行训练;训练过程中,数据标签的背景像素值设置为0,股骨为1,胫骨为2,腓骨为3,髌骨为4,训练的批尺寸batch_size为6,学习率设置为1e-4,优化器使用Adam优化器,使用的损失函数为DICE loss,根据训练过程中损失函数的变化,调整训练的批尺寸;
精确分割训练:完成粗分割后,送入PointRend神经网络进行精确分割;初始过程包括,先使用双线性插值上采样粗分割的预测结果,再在特征图中选定多个置信度为0.5的点作为预设置信度的点,然后通过双线性插值Bilinear计算多个点的特征表示并且预测点所属的标签labels;重复所述初始过程,直到上采样到预测结果的置信度达到目标置信度。
利用训练好的分割神经网络模型进行分割。
上述分割过程可在图像分割单元中实现,分割出的这四类骨骼结构无连结,并且边缘清晰。
基于深度学习的识别标记的步骤
步骤包括:
(1)识别关键解剖位点。
识别关键解剖位点的步骤包括:
构建数据库:获取下肢医学图像数据集,手动标定关键点;将所述数据集按照7:3的比例划分为训练集和测试集。
建立识别神经网络模型:所述识别神经网络模型为hourglass,hourglass的网络细节在此不再详述。
模型训练:在进行训练时,输入像素值为0-255的正投影图像和label.txt,可以通过每张图片的名称找到互相对应的点的坐标;若直接用目标点的坐标进行学习,神经网络需要自行将空间位置转换为坐标,是一种比较难学习的训练方式,所以将这些点生成高斯图,用heatmap去监督,即网络的输出是一个与输入大小相同尺寸的特征图,在检测点的位置为1,其他位置为0,多个点的检测就输出多个通道的特征图;网络使用Adam优化,学习率为1e-5,批尺寸batch_size为4,损失函数使用L2正则化,可以根据训练过程中损失函数的变化,调整训练的批尺寸,得到关键点位的坐标值。
利用训练好的识别神经网络模型进行关键解剖位点的识别。
(2)利用关键解剖位点获得关键轴线:
对于股骨解剖轴,可以通过拟合股骨髓腔的不同层面上的中心点而得到。对于胫骨解剖轴,可以通过拟合胫骨髓腔的不同层面上的中心点而得到。拟合的方法为最小二乘法、梯度下降、高斯牛顿、列-马算法中的任一种。
对于其它种类的关键轴线，可以利用确定的两个端点而得到。如，股骨机械轴的两个端点（髋关节中心点和膝关节中心点）已被识别出来，可以通过这两点确定股骨机械轴线。
(3)测量关键解剖参数。
上述识别标记步骤在识别标记单元实现。
本公开对医学图像数据处理步骤所包括的三个步骤没有顺序上的限定。本公开在此处为了具体说明医学图像数据处理的步骤而给出了包含顺序的处理步骤，但不应理解为处理顺序的限定。
四类骨骼结构（股骨、胫骨、腓骨和髌骨）通过图像组合模块可以进行任意组合，通过图像透明度切换模块可以进行透明度的变换，通过图像缩放模块可以进行图像缩放、通过图像旋转模块可以进行图像旋转。图3为分割后四类骨骼组合在一起的三维影像，显影效果为不透明（可切换为透明状态），其中a图和b图的角度不同，在观察时可以选择不同的角度进行观察。由于本公开将股骨、胫骨、腓骨和髌骨这四类骨骼结构进行了分割，显然，这四类骨骼结构可以任意进行组合。图4为只显示股骨的股骨三维影像，显影效果为不透明（可切换为透明状态），其中a图和b图的角度不同。图5为只显示胫骨的胫骨三维影像，显影效果为不透明（可切换为透明状态），其中a图和b图的角度不同。此处只结合附图列举了四类骨骼组合在一起显示、只显示股骨、只显示胫骨的情况，还可以只显示腓骨，还可以只显示髌骨，还可以同时显示股骨和胫骨等。图6为图5b胫骨平台处的放大图。当然，任意的组合方式下的三维影像均可进行放大或缩小。如，当只显示股骨时，可以进行放大或缩小。如，同时显示股骨和胫骨时，可以进行放大或缩小。同时显示股骨、胫骨和腓骨时，可以进行放大或缩小。同时显示股骨、胫骨、腓骨和髌骨时，可以进行放大或缩小。
图7显示了标记有关键轴线、关键解剖位点和关键解剖参数后的结果图。可以观察各个关键解剖位点、关键轴线中至少一个的位置是否正确,若不正确,可以手动标记关键解剖位点、关键轴线中的至少一个(通过手动标记关键解剖位点而实现)。
可视化模拟匹配
将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;当模拟匹配效果符合手术要求时,视为完成模拟匹配。三维假体包括三维股骨假体和三维胫骨假体;该步骤可以具体按照如下方法进行:
假体植入:根据前期的分割识别标记结果,自动将三维股骨假体植入股骨,将三维胫骨假体植入胫骨,将胫骨垫植入假体间隙;
假体选择：选择三维股骨假体的类型和型号，调整其三维空间位置；选择三维胫骨假体的类型和型号，调整其三维空间位置；选择胫骨垫的类型和型号；选择模拟手术条件，模拟手术条件包括股骨手术参数和胫骨手术参数，股骨手术参数包括股骨远端截骨量、股骨后髁截骨量、内外旋角、内外翻角和股骨假体屈曲角；胫骨手术参数包括胫骨截骨量、内外旋角、内外翻角和后倾角；
模拟截骨:根据三维假体与骨骼的匹配关系智能截骨,观察模拟匹配效果;
可以在如下一个或多个状态下观察模拟匹配效果:
(a)截骨状态或非截骨状态;
(b)骨骼透明状态或不透明状态;
(c)腓骨显示或不显示状态;
若是模拟匹配效果不符合手术需求,则重复所述假体选择和所述模拟截骨的步骤:重新选择假体类型、型号、模拟手术条件中的至少一种,然后进行模拟截骨,观察模拟匹配效果,直至模拟匹配效果符合手术要求。
可视化模拟匹配的步骤在模拟匹配模块201中进行,图8显示了模拟匹配的界面,状态是截骨前,显影效果为透明(可切换)。图9显示了截骨后的结果图,显影效果为不透明(可切换)。在模拟匹配的过程中,如图10所示,可以利用图像旋转模块调节图像角度,多方位进行观察。
术后模拟
利用可视化术后模拟模块501进行术后模拟，如图11所示，观察截骨后假体与股骨和胫骨的整体匹配效果，观察全膝关节置换术术后肢体运动情况（图中未示出）。
此外,在完成术后模拟后,还可以利用数据导出模块将术前规划的数据导出,这些数据包括可视化模拟匹配过程中的假体(股骨、胫骨和胫骨垫)类型和型号、模拟手术条件,形成术前规划报告。
图12为本公开的实施例提供的一种设备的结构示意图，该设备包括存储器10、处理器20、输入装置30和输出装置40。设备中的处理器20的数量可以是一个或多个，图12中以一个处理器20为例；设备中的存储器10、处理器20、输入装置30和输出装置40可以通过总线或其它方式连接，图12中以通过总线50连接为例。
存储器10作为一种计算机可读存储介质,可被配置为存储软件程序、计算机可执行程序以及模块,如本公开实施例中的基于深度学习的全膝关节置换术的术前规划方法对应的程序指令/模块。处理器20通过运行存储在存储器10中的软件程序、指令以及模块,从而执行设备的各种功能应用以及数据处理,即实现上述的术前规划方法。
存储器10可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据设备的使用所创建的数据等。此外,存储器10可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器10可进一步包括相对于处理器20远程设置的存储器,这些远程存储器可以通过网络连接至设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置30可被配置为接收输入的数字或字符信息,以及产生与装置的用户设置以及功能控制有关的键信号输入。输出装置40可包括显示屏等显示设备。
本公开实施例还提供一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行一种基于深度学习的全膝关节置换术的术前规划方法,该方法包括:
基于深度学习的医学图像数据处理的步骤,通过所述医学图像数据处理获得四类骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点、关键解剖参数;四类骨骼结构包括股骨、胫骨、腓骨和髌骨;关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点、踝关节中心点;关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;所述关键解剖参数包括胫股角和远端股骨角;更为具体的方法见第一方面内容;
可视化模拟匹配的步骤,将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;当模拟匹配效果符合手术要求时,视为完成模拟匹配。更为具体的方法见第一方面内容。
当然,本公开所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上所述的方法操作,还可以执行本公开任意一种基于深度学习的全膝关节置换术的术前规划方法中的相关操作。
通过以上关于实施方式的描述，所属领域的技术人员可以清楚地了解到，本公开可借助软件及必需的通用硬件来实现，当然也可以通过硬件实现，但很多情况下前者是更佳的实施方式。依据这样的理解，本公开的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品可以存储在计算机可读存储介质中，如计算机的软盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、闪存(FLASH)、硬盘或光盘等，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)执行本公开各个实施例所述的方法。
以上实施例仅用以说明本公开的技术方案,而非对其限制;尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本公开各实施例技术方案的范围。

Claims (15)

  1. 一种基于深度学习的全膝关节置换术的术前规划方法,所述方法基于患者下肢医学图像数据,所述方法包括:
    基于深度学习的医学图像数据处理的步骤,通过所述医学图像数据处理获得骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数;所述骨骼结构包括股骨、胫骨、腓骨和髌骨;所述关键解剖位点包括股骨髓腔的不同层面上的中心点、胫骨髓腔的不同层面上的中心点、髋关节中心点、膝关节中心点、髁间棘的中心点和踝关节中心点;所述关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;所述关键解剖参数包括胫股角和远端股骨角;
    可视化模拟匹配的步骤,将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;当模拟匹配效果符合手术要求时,视为完成模拟匹配。
  2. 根据权利要求1所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述医学图像数据处理的步骤包括骨骼三维影像重建的步骤;图像分割的步骤;识别标记关键轴线、关键解剖位点和关键解剖参数的步骤。
  3. 根据权利要求2所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述图像分割基于深度学习进行,所述图像分割的步骤包括:
    构建下肢医学图像数据库:获取下肢医学图像数据集,手动标注出股骨、胫骨、腓骨和髌骨区域;将所述数据集划分为训练集和测试集;将未标注前的医学图像数据转换成第一格式的图片并保存,将标注后的数据转换成第二格式的图片并保存;
    建立分割神经网络模型；所述分割神经网络模型包括级联的粗分割神经网络和精确分割神经网络；所述粗分割神经网络作为主干网络进行粗分割，所述精确分割神经网络基于所述粗分割进行精确分割；所述粗分割神经网络选自FCN、SegNet、Unet、3D-Unet、Mask-RCNN、空洞卷积、ENet、CRFasRNN、PSPNet、ParseNet、RefineNet、ReSeg、LSTM-CF、DeepMask、DeepLabV1、DeepLabV2、DeepLabV3中的至少一种；所述精确分割神经网络选自EfficientDet、SimCLR、PointRend中至少一种；
    模型训练:利用训练集对分割神经网络模型进行训练,并利用测试集进行测试;和
    利用训练好的分割神经网络模型进行分割。
  4. 根据权利要求3所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述粗分割神经网络采用Unet神经网络;
    所述Unet神经网络包括n个上采样层和n个下采样层,n的取值为2-8;
    每个上采样层包括1个上采样操作层和2个卷积层,其中的卷积层中的卷积核大小为3*3,上采样操作层中的卷积核大小为2*2,每个上采样层中的卷积核个数为512,256,256,128;
    每个下采样层包括2个卷积层和1个池化层,其中的卷积层中的卷积核大小为3*3,池化层中的卷积核大小为2*2,每个卷积层中的卷积核的个数为128,256,256,512;
    所述方法还包括执行以下至少一种操作:
    最后一次上采样结束后设有一个dropout层,dropout率设置为0.5-0.7;
    所有的卷积层后面设有激活层,激活层使用的激活函数为relu函数。
  5. 根据权利要求3所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述训练按照如下方法进行:
    粗分割:将训练集送入粗分割神经网络进行训练;训练过程中,数据标签的背景像素值设置为0,股骨为1,胫骨为2,腓骨为3,髌骨为4,训练的批尺寸batch_size为6,学习率设置为1e-4,优化器使用Adam优化器,使用的损失函数为DICE loss,根据训练过程中损失函数的变化,调整训练的批尺寸;
    精确分割:送入精确分割神经网络进行精确分割;初始过程包括,使用双线性插值上采样粗分割的预测结果,在特征图中选定多个置信度为预设置信度的点,通过双线性插值Bilinear计算多个点的特征表示并且预测点所属的标签labels;重复所述初始过程,直到上采样到预测结果的置信度达到目标置信度。
  6. 根据权利要求2所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    识别标记关键轴线、关键解剖位点和关键解剖参数基于深度学习进行,该步骤包括:
    识别关键解剖位点;利用MTCNN、locnet、Pyramid Residual Module、Densenet、hourglass、resnet、SegNet、Unet、R-CNN、Fast R-CNN、Faster R-CNN、R-FCN、SSD中的至少一种识别神经网络模型识别关键解剖位点;
    利用关键解剖位点获得关键轴线;和
    测量关键解剖参数。
  7. 根据权利要求6所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述识别关键解剖位点包括:
    构建数据库:获取下肢医学图像数据集,手动标定关键解剖位点;将所述数据集划分为训练集和测试集;
    建立识别神经网络模型;
    模型训练:利用训练集对识别神经网络模型进行训练,并利用测试集进行测试;
    利用训练好的识别神经网络模型进行关键解剖位点的识别。
  8. 根据权利要求6所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述利用关键解剖位点获得关键轴线包括:
    对于股骨解剖轴,通过拟合股骨髓腔的不同层面上的中心点而得到;
    对于胫骨解剖轴,通过拟合胫骨髓腔的不同层面上的中心点而得到;
    其中,所述拟合的方法为最小二乘法、梯度下降、高斯牛顿、列-马算法中的任一种。
  9. 根据权利要求1所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述三维假体包括三维股骨假体和三维胫骨假体;和
    所述模拟匹配包括:
    假体植入:自动将三维股骨假体植入股骨,将三维胫骨假体植入胫骨;
    假体选择:选择三维股骨假体和三维胫骨假体,选择模拟手术条件;
    模拟截骨:根据三维假体与骨骼的匹配关系智能截骨,观察三维假体与骨骼的模拟匹配效果;
    若是模拟匹配效果不符合手术需求,则重复所述假体选择和所述模拟截骨的步骤,直至模拟匹配效果符合手术要求。
  10. 根据权利要求9所述的基于深度学习的全膝关节置换术的术前规划方法,其中,
    所述假体选择的步骤包括如下三维股骨假体选择的步骤、三维胫骨假体选择的步骤、模拟手术条件选择的步骤中的至少一种：
    三维股骨假体选择的步骤:选择三维股骨假体包括选择股骨假体类型、股骨假体型号、股骨假体的三维空间位置中的至少一种;
    三维胫骨假体选择的步骤:选择三维胫骨假体包括选择胫骨假体类型、胫骨假体型号、胫骨假体的三维空间位置中的至少一种;
    模拟手术条件选择的步骤:选择模拟手术条件包括选择股骨手术参数、选择胫骨手术参数中的至少一种;所述股骨手术参数包括股骨远端截骨量、股骨后髁截骨量、内外旋角、内外翻角和股骨假体屈曲角;所述胫骨手术参数包括胫骨截骨量、内外旋角、内外翻角和后倾角。
  11. 根据权利要求1所述的基于深度学习的全膝关节置换术的术前规划方法,还包括:
    对至少一种骨骼结构进行显示,并执行以下操作方式中至少一种:
    透明度的切换、图像缩放、图像旋转、图像移动;
    所述透明度包括透明和不透明;
    所述关键解剖位点还包括股骨内髁凹点、股骨外髁最高点、股骨内外后髁最低点、胫骨平台内侧低点和外侧高点、后交叉韧带中点和胫骨结节内侧缘点、股骨远端最低点;所述关键轴线还包括通髁线、后髁连线、胫骨膝关节线、股骨矢状轴、股骨膝关节线中的任一种或多种;所述关键解剖参数还包括股骨后髁角。
  12. 根据权利要求11所述的基于深度学习的全膝关节置换术的术前规划方法,还包括:
    在透明度为不透明的状态下标记出关键轴线;
    在标记关键轴线后,观察关键轴线、关键解剖位点中的至少一个是否对位,并将不对位的关键轴线、关键解剖位点中的至少一个进行手动标记;独立显示出股骨、胫骨中至少一种,通过旋转调整股骨、胫骨中至少一种的观察角度,然后再进行关键轴线、关键解剖位点中的至少一个的手动标记。
  13. 根据权利要求1所述的基于深度学习的全膝关节置换术的术前规划方法,所述方法还包括:
    可视化术后模拟的步骤,以模拟全膝关节置换术的术后肢体运动情况;
    将符合手术需求的模拟匹配数据导出,形成术前规划报告的步骤,以便于医生进行术前部署。
  14. 一种基于深度学习的全膝关节置换术的术前规划系统,所述系统包括:
    医学图像数据处理模块,被配置为获得骨骼结构的三维影像、识别标记出关键轴线、关键解剖位点和关键解剖参数;所述骨骼结构包括股骨、胫骨、腓骨和髌骨;所述关键轴线包括股骨解剖轴、股骨机械轴、胫骨解剖轴和胫骨机械轴;所述关键解剖参数包括胫股角和远端股骨角;
    模拟匹配模块,将三维假体与三维股骨和三维胫骨进行模拟匹配,实时观察模拟匹配效果;和
    显示模块:被配置为显示骨骼结构的三维影像、关键轴线、关键解剖位点、关键解剖参数和模拟匹配效果。
  15. 一种计算机可读存储介质,其上存储有计算机程序,所述程序被处理器执行时实现权利要求1至13任一项所述的基于深度学习的全膝关节置换术的术前规划方法。
PCT/CN2021/113946 2020-08-22 2021-08-23 基于深度学习的全膝关节置换术的术前规划方法、系统和介质 WO2022042459A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010852941 2020-08-22
CN202010852941.3 2020-08-22

Publications (1)

Publication Number Publication Date
WO2022042459A1 true WO2022042459A1 (zh) 2022-03-03

Family

ID=76458778

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113946 WO2022042459A1 (zh) 2020-08-22 2021-08-23 基于深度学习的全膝关节置换术的术前规划方法、系统和介质

Country Status (2)

Country Link
CN (1) CN113017829B (zh)
WO (1) WO2022042459A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114587583A (zh) * 2022-03-04 2022-06-07 杭州湖西云百生科技有限公司 膝关节手术导航系统术中假体推荐方法及系统
CN115393272A (zh) * 2022-07-15 2022-11-25 北京长木谷医疗科技有限公司 基于深度学习的膝关节髌骨置换三维术前规划系统及方法
CN116687434A (zh) * 2023-08-03 2023-09-05 北京壹点灵动科技有限公司 对象的术后角度的确定方法、装置、存储介质和处理器
CN116747026A (zh) * 2023-06-05 2023-09-15 北京长木谷医疗科技股份有限公司 基于深度强化学习的机器人智能截骨方法、装置及设备
CN116934708A (zh) * 2023-07-20 2023-10-24 北京长木谷医疗科技股份有限公司 胫骨平台内外侧低点计算方法、装置、设备及存储介质
CN117671221A (zh) * 2024-02-01 2024-03-08 江苏一影医疗设备有限公司 基于膝关节有限角图像的数据修正方法、装置及存储介质
TWI838199B (zh) * 2023-03-31 2024-04-01 慧術科技股份有限公司 醫學靜態圖片對照教學系統及其方法
CN118000908A (zh) * 2024-04-09 2024-05-10 北京天智航医疗科技股份有限公司 全膝关节置换规划方法、装置、设备及存储介质

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170128B (zh) * 2020-08-21 2023-05-30 张逸凌 基于深度学习的骨骼分割方法和系统
CN113017829B (zh) * 2020-08-22 2023-08-29 张逸凌 一种基于深度学习的全膝关节置换术的术前规划方法、系统、介质和设备
CN112957126B (zh) * 2021-02-10 2022-02-08 北京长木谷医疗科技有限公司 基于深度学习的单髁置换术前规划方法和相关设备
CN113633379A (zh) * 2021-07-30 2021-11-12 天津市天津医院 下肢机械轴导航系统、下肢手术导航方法以及存储介质
CN113744214B (zh) * 2021-08-24 2022-05-13 北京长木谷医疗科技有限公司 基于深度强化学习的股骨柄放置装置及电子设备
CN113842211B (zh) * 2021-09-03 2022-10-21 北京长木谷医疗科技有限公司 膝关节置换的三维术前规划系统及假体模型匹配方法
CN113974920B (zh) * 2021-10-08 2022-10-11 北京长木谷医疗科技有限公司 膝关节股骨力线确定方法和装置、电子设备、存储介质
CN113907774A (zh) * 2021-10-13 2022-01-11 瓴域影诺(北京)科技有限公司 一种测量下肢力线的方法及装置
CN113850810B (zh) * 2021-12-01 2022-03-04 杭州柳叶刀机器人有限公司 用于转正股骨的方法及手术系统、存储介质以及电子设备
CN113870261B (zh) * 2021-12-01 2022-05-13 杭州柳叶刀机器人有限公司 用神经网络识别力线的方法与系统、存储介质及电子设备
CN114463414A (zh) * 2021-12-13 2022-05-10 北京长木谷医疗科技有限公司 膝关节外旋角测量方法、装置、电子设备及存储介质
CN114419618B (zh) * 2022-01-27 2024-02-02 北京长木谷医疗科技股份有限公司 基于深度学习的全髋关节置换术前规划系统
CN114612400A (zh) * 2022-03-02 2022-06-10 北京长木谷医疗科技有限公司 基于深度学习的膝关节股骨置换术后评估系统
CN114693602B (zh) * 2022-03-02 2023-04-18 北京长木谷医疗科技有限公司 膝关节动张力平衡态评估方法及装置
CN114504384B (zh) * 2022-03-25 2022-11-18 中国人民解放军陆军军医大学第二附属医院 一种激光截骨手术机器人的膝关节置换方法及装置
CN114431957B (zh) * 2022-04-12 2022-07-29 北京长木谷医疗科技有限公司 基于深度学习的全膝关节置换术后翻修术前规划系统
CN115005977A (zh) * 2022-05-20 2022-09-06 长春理工大学 一种膝关节置换手术术前规划方法
CN115486939A (zh) * 2022-08-31 2022-12-20 北京长木谷医疗科技有限公司 骨科机手术器人智能感知解剖结构的方法、装置及系统
CN115381553B (zh) * 2022-09-21 2023-04-07 北京长木谷医疗科技有限公司 复杂性骨性融合膝关节的智能定位装置设计方法及系统
CN115607286B (zh) * 2022-12-20 2023-03-17 北京维卓致远医疗科技发展有限责任公司 基于双目标定的膝关节置换手术导航方法、系统及设备
CN116883326A (zh) * 2023-06-21 2023-10-13 北京长木谷医疗科技股份有限公司 膝关节解剖位点识别方法、装置、设备及可读存储介质
CN116898574B (zh) * 2023-09-06 2024-01-09 北京长木谷医疗科技股份有限公司 人工智能膝关节韧带重建术的术前规划方法、系统及设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111249002A (zh) * 2020-01-21 2020-06-09 北京天智航医疗科技股份有限公司 全膝关节置换的术中规划调整方法、装置及设备
CN111297478A (zh) * 2020-03-10 2020-06-19 南京市第一医院 一种膝关节翻修手术的术前规划方法
CN113017829A (zh) * 2020-08-22 2021-06-25 张逸凌 一种基于深度学习的全膝关节置换术的术前规划方法、系统、介质和设备

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1697874B8 (en) * 2003-02-04 2012-03-14 Mako Surgical Corp. Computer-assisted knee replacement apparatus
US9345548B2 (en) * 2006-02-27 2016-05-24 Biomet Manufacturing, Llc Patient-specific pre-operative planning
JP5171193B2 (ja) * 2007-09-28 2013-03-27 株式会社 レキシー 人工膝関節置換手術の術前計画用プログラム
CN107569310B (zh) * 2010-08-13 2020-11-17 史密夫和内修有限公司 用于优化骨科流程参数的系统和方法
CN103796609A (zh) * 2011-07-20 2014-05-14 史密夫和内修有限公司 用于优化植入物与解剖学的配合的系统和方法
AU2015320707B2 (en) * 2014-09-24 2020-07-02 Depuy Ireland Unlimited Company Surgical planning and method
US9532845B1 (en) * 2015-08-11 2017-01-03 ITKR Software LLC Methods for facilitating individualized kinematically aligned total knee replacements and devices thereof
CA3016604A1 (en) * 2016-03-12 2017-09-21 Philipp K. Lang Devices and methods for surgery
CA3024840A1 (en) * 2016-05-27 2017-11-30 Mako Surgical Corp. Preoperative planning and associated intraoperative registration for a surgical system
CN111166474B (zh) * 2019-04-23 2021-08-27 艾瑞迈迪科技石家庄有限公司 一种关节置换手术术前的辅助诊查方法和装置
CN110782976B (zh) * 2019-10-17 2022-06-28 北京大学 一种全膝关节置换术假体型号预测方法
CN111134840B (zh) * 2019-12-28 2020-11-20 元化智能科技(深圳)有限公司 膝关节置换手术方案的生成装置和终端
CN111563906A (zh) * 2020-05-07 2020-08-21 南开大学 一种基于深度卷积神经网络的膝关节磁共振图像自动分割方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111249002A (zh) * 2020-01-21 2020-06-09 北京天智航医疗科技股份有限公司 全膝关节置换的术中规划调整方法、装置及设备
CN111297478A (zh) * 2020-03-10 2020-06-19 南京市第一医院 一种膝关节翻修手术的术前规划方法
CN113017829A (zh) * 2020-08-22 2021-06-25 张逸凌 一种基于深度学习的全膝关节置换术的术前规划方法、系统、介质和设备

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KIRILLOV ALEXANDER: "He Kaiming’s Team Release A New Open Source Image Segmentation Algorithm PointRend: Performance is Significantly Improved, and the Computing Power is Only 2.6% of Mask R-CNN", INFOQ, 29 February 2020 (2020-02-29), pages 1 - 12, XP055903553, Retrieved from the Internet <URL:https://www.infoq.cn/article/tvxqierzdis5milerjin> [retrieved on 20220321] *
NATIONAL CLINICAL RESEARCH CENTER FOR ORTHOPEDICS, SPORTS MEDICINE & REHABILITATION: "The Department of Orthopaedic Medicine of the Chinese People's Liberation Army General Hospital uses the artificial intelligence 3D planning system to accurately complete the total knee replacement surgery", DJKPAI.COM, 28 July 2020 (2020-07-28), pages 1 - 3, XP055903541, Retrieved from the Internet <URL:http://www.djkpai.com/ai/170738.jhtml> [retrieved on 20220321] *
TOLPADI ANIKET A., LEE JINHEE J., PEDOIA VALENTINA, MAJUMDAR SHARMILA: "Deep Learning Predicts Total Knee Replacement from Magnetic Resonance Images", SCIENTIFIC REPORTS, vol. 10, no. 6371, 1 December 2020 (2020-12-01), pages 1 - 12, XP055903554, DOI: 10.1038/s41598-020-63395-9 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114587583A (zh) * 2022-03-04 2022-06-07 杭州湖西云百生科技有限公司 Intraoperative prosthesis recommendation method and system for a knee surgery navigation system
CN115393272A (zh) * 2022-07-15 2022-11-25 北京长木谷医疗科技有限公司 Deep-learning-based three-dimensional preoperative planning system and method for knee patellar replacement
TWI838199B (zh) * 2023-03-31 2024-04-01 慧術科技股份有限公司 Medical static image comparison teaching system and method
CN116747026A (zh) * 2023-06-05 2023-09-15 北京长木谷医疗科技股份有限公司 Deep-reinforcement-learning-based intelligent robotic osteotomy method, apparatus and device
CN116934708A (zh) * 2023-07-20 2023-10-24 北京长木谷医疗科技股份有限公司 Method, apparatus, device and storage medium for calculating the medial and lateral low points of the tibial plateau
CN116687434A (zh) * 2023-08-03 2023-09-05 北京壹点灵动科技有限公司 Method, apparatus, storage medium and processor for determining a subject's postoperative angle
CN116687434B (zh) * 2023-08-03 2023-11-24 北京壹点灵动科技有限公司 Method, apparatus, storage medium and processor for determining a subject's postoperative angle
CN117671221A (zh) * 2024-02-01 2024-03-08 江苏一影医疗设备有限公司 Data correction method, apparatus and storage medium based on limited-angle knee joint images
CN117671221B (zh) * 2024-02-01 2024-05-03 江苏一影医疗设备有限公司 Data correction method, apparatus and storage medium based on limited-angle knee joint images
CN118000908A (zh) * 2024-04-09 2024-05-10 北京天智航医疗科技股份有限公司 Total knee replacement planning method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN113017829B (zh) 2023-08-29
CN113017829A (zh) 2021-06-25

Similar Documents

Publication Publication Date Title
WO2022042459A1 (zh) Deep-learning-based preoperative planning method, system and medium for total knee arthroplasty
WO2022170768A1 (zh) Method, apparatus, device and storage medium for processing unicondylar joint images
WO2022142741A1 (zh) Preoperative planning method and apparatus for total knee arthroplasty
WO2022183719A1 (zh) Deep-learning-based preoperative planning method and device for total hip replacement revision
US11798688B2 (en) Systems and methods for simulating spine and skeletal system pathologies
JP2021013835A (ja) Ultra-wideband positioning for wireless ultrasound tracking and communication
Tsai et al. Virtual reality orthopedic surgery simulator
CN110430809A (zh) Optical guidance for surgical, medical and dental procedures
AU2018342606A1 (en) Systems and methods for simulating spine and skeletal system pathologies
CN109310476A (zh) Apparatus and method for surgery
CN106963487B (zh) Simulated surgery method for the discoid meniscus of the knee joint
CN107106239A (zh) Surgical planning and method
EP2522295A1 (en) Virtual platform for pre-surgery simulation and relative bio-mechanic validation of prosthesis surgery of the lumbo-sacral area of the human spine
CN114494183A (zh) Artificial-intelligence-based automatic acetabular radius measurement method and system
Ahrend et al. Development of generic Asian pelvic bone models using CT-based 3D statistical modelling
US20220249168A1 (en) Orthopaedic pre-operative planning system
WO2016110816A1 (en) Orthopedic surgery planning system
CA3145179A1 (en) Orthopaedic pre-operative planning system
CN109512513A (zh) Method for determining the mechanical axis of the lower-limb tibia based on cylinder fitting
De Momi et al. Hip joint anatomy virtual and stereolithographic reconstruction for preoperative planning of total hip replacement
Kang et al. Determining the location of hip joint centre: application of a conchoid's shape to the acetabular cartilage surface of magnetic resonance images
Chang et al. A pre‐operative approach of range of motion simulation and verification for femoroacetabular impingement
BUFORD Jr et al. A modeling and simulation system for the human hand
Valstar et al. Towards computer-assisted surgery in shoulder joint replacement
Mercader et al. Visualization of patient’s knee movement and joint contact area during knee flexion for orthopaedic surgery planing validation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21860295

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21860295

Country of ref document: EP

Kind code of ref document: A1