WO2024011943A1 - Three-dimensional preoperative planning method and system for knee patella replacement based on deep learning - Google Patents

Info

Publication number
WO2024011943A1
WO2024011943A1 · PCT/CN2023/082710
Authority
WO
WIPO (PCT)
Prior art keywords
patella
point
dimensional
feature
model
Application number
PCT/CN2023/082710
Other languages
English (en)
French (fr)
Inventor
张逸凌
刘星宇
Original Assignee
北京长木谷医疗科技有限公司
张逸凌
Application filed by 北京长木谷医疗科技有限公司 and 张逸凌
Publication of WO2024011943A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30008 - Bone

Definitions

  • This application relates to the field of computer technology, and in particular to a three-dimensional preoperative planning method and system for knee patella replacement based on deep learning.
  • This application provides a three-dimensional preoperative planning method and system for knee joint patella replacement based on deep learning, to address the shortcoming of existing technology that it cannot obtain accurate patellar information or provide an accurate preoperative planning solution for the knee joint patella; the method obtains accurate patellar information and thereby provides an accurate preoperative planning solution for the knee joint patella.
  • This application provides a three-dimensional preoperative planning method for knee patella replacement based on deep learning.
  • the method includes:
  • acquiring a medical image of the knee joint, performing image segmentation based on the medical image to obtain a patella feature map, and identifying and marking a first patella feature point on the patella feature map, wherein the first patella feature point includes a first upper pole point, a first lower pole point, a first lateral edge point, and a first medial edge point;
  • performing three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and, based on the position information of the first patella feature point, projecting the first patella feature point onto the first surface of the three-dimensional patella model to obtain a second patella feature point, wherein the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point, and a second medial edge point;
  • based on the patella prosthesis and the second patella feature point, a target osteotomy surface of the three-dimensional patella model is determined.
  • The second patella feature point also includes a plurality of first target points.
  • the method further includes:
  • the first surface is divided into four point candidate areas;
  • any three point candidate areas are selected from the four point candidate areas, and one point is selected from each of the three point candidate areas as a first target point; the three first target points determine a first plane, wherein the first plane is used to determine the target osteotomy surface of the three-dimensional patella model.
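  • The first-plane step above reduces to finding the plane through three points. The following is a minimal illustrative sketch (not the application's own code), with hypothetical coordinates in millimetres:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (unit_normal, d) for the plane n . x = d through three points."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)   # cross of two in-plane edge vectors
    norm = np.linalg.norm(normal)
    if norm == 0:
        raise ValueError("the three target points are collinear")
    normal = normal / norm
    return normal, float(normal @ p1)

# three hypothetical first target points chosen from three point candidate areas
n, d = plane_from_points([10.0, 0.0, 5.0], [0.0, 12.0, 5.0], [-8.0, -6.0, 5.0])
```

Any plane parallel to this one, such as the target osteotomy surface, shares the same unit normal and differs only in the offset d.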
  • In some embodiments, before projecting the first patella feature point onto the first surface of the three-dimensional patella model based on the position information of the first patella feature point to obtain the second patella feature point, the method further includes:
  • adjusting the three-dimensional patella model based on correction line segments so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body, wherein the correction line segments are formed by the connecting line between the second upper pole point and the second lower pole point, and the connecting line between the second lateral edge point and the second medial edge point.
  • Determining the target osteotomy surface of the three-dimensional patella model includes:
  • a target osteotomy surface of the three-dimensional patellar model is determined; wherein the target osteotomy surface is parallel to the first plane.
  • Performing image segmentation based on the medical image to obtain a patella feature map includes:
  • Identifying and marking the first patella feature point on the patella feature map includes:
  • the patella feature map is input into a pre-trained point recognition model to obtain an image with the first patella feature point marked, wherein the point recognition model is a model trained based on the sample patella feature map.
  • the segmentation model includes: a deep convolutional neural network, an atrous spatial convolution pooling pyramid network, a first convolution layer, a second convolution layer, a third convolution layer, a first pooling layer, a second pooling layer, and a splicing layer.
  • the bone feature map is input into the third convolution layer, and the image features output by the third convolution layer are input into the second pooling layer for upsampling to obtain a patella feature map with the same size as the medical image.
  • the point recognition model includes: a fourth convolution layer, a fifth convolution layer, a replication layer and a pooling layer;
  • a heat map is obtained, where the heat map includes pixels whose pixel values characterize the probability of a patella feature point;
  • This application also provides a three-dimensional preoperative planning system for knee patella replacement based on deep learning.
  • the system includes:
  • the first acquisition module is configured to acquire a medical image of the knee joint, and perform image segmentation based on the medical image to obtain a patella feature map;
  • a marking module configured to identify and mark a first patella feature point on the patella feature map, wherein the first patella feature point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point;
  • a projection module configured to perform three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and to project the first patella feature point onto the first surface of the three-dimensional patella model based on the position information of the first patella feature point to obtain a second patella feature point, wherein the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point, and a second medial edge point;
  • the second acquisition module is configured to acquire the patella prosthesis based on the structural parameters of the three-dimensional patella model
  • a first determination module is configured to determine a target osteotomy surface of the three-dimensional patella model based on the patellar prosthesis and the second patella feature point.
  • This application also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, it implements the steps of any one of the above patellar image processing methods.
  • This application also provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the above patellar image processing methods are implemented.
  • the present application also provides a computer program product, including a computer program that, when executed by a processor, implements the steps of any one of the above patellar image processing methods.
  • This application provides a three-dimensional preoperative planning method and system for knee joint patella replacement based on deep learning.
  • A patella feature map is obtained, and a first patella feature point is identified and marked on the patella feature map, where the first patella feature point includes the first upper pole point, the first lower pole point, the first lateral edge point, and the first medial edge point. Three-dimensional reconstruction is performed based on the patella feature map to obtain a three-dimensional patella model.
  • Based on the position information of the first patella feature point, the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain a second patella feature point, where the second patella feature point includes the second upper pole point, the second lower pole point, the second lateral edge point, and the second medial edge point. The patella prosthesis is obtained based on the structural parameters of the three-dimensional patella model.
  • Based on the patella prosthesis and the second patella feature point, the target osteotomy surface of the three-dimensional patella model is determined. In this way, the target osteotomy surface of the three-dimensional patella model can be obtained, thereby providing an accurate preoperative planning solution for the knee joint patella.
  • Figure 1 is the first schematic flow chart of the three-dimensional preoperative planning method for knee patella replacement based on deep learning provided by this application;
  • Figure 2 is a schematic diagram of marking the second patellar characteristic point provided by this application.
  • Figure 3 is a schematic diagram of marking the first target point on the first surface provided by this application.
  • Figure 4 is a side view of marking the first target point on the first surface provided by the present application.
  • Figure 5 is the second schematic flow chart of the three-dimensional preoperative planning method for knee patella replacement based on deep learning provided by this application;
  • Figure 6 is a schematic structural diagram of the segmentation model provided by this application.
  • Figure 7 is a schematic structural diagram of the point recognition model provided by this application.
  • Figure 8 is a schematic structural diagram of the three-dimensional preoperative planning system for knee patella replacement based on deep learning provided by this application;
  • Figure 9 is a schematic structural diagram of an electronic device provided by this application.
  • This application provides a three-dimensional preoperative planning method and system for knee patella replacement based on deep learning, as well as an electronic device, a non-transitory computer-readable storage medium, and a computer program product. The following describes the deep-learning-based three-dimensional preoperative planning method for knee patella replacement of this application with reference to Figure 1.
  • this application discloses a three-dimensional preoperative planning method for knee patella replacement based on deep learning.
  • the method includes:
  • the medical image of the knee joint includes: femur, tibia and patella.
  • image segmentation can be performed based on the medical image of the knee joint to obtain the patella feature map.
  • The patella feature map is a feature map of the surface of the patella close to the tibia.
  • After obtaining the patella feature map, in order to determine the target osteotomy surface, the first patella feature point can be identified on the patella feature map and marked on it, that is, an image with the marked first patella feature point is obtained.
  • the first patellar characteristic point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point.
  • a three-dimensional reconstruction can be performed based on the patella characteristic map to obtain a three-dimensional patella model.
  • multiple patella feature maps can be obtained, and then a three-dimensional patella model is formed based on stacking the multiple patella feature maps.
  • For example, the three-dimensional reconstruction can be implemented using the VTK (Visualization Toolkit).
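  • A minimal NumPy sketch of the slice-stacking idea follows; it is illustrative only (a production pipeline would use VTK's reconstruction filters), and the 3x3x3 "patella" volume is a toy example:

```python
import numpy as np

def stack_slices(masks):
    """Stack per-slice binary patella masks into a 3D voxel volume."""
    return np.stack(masks, axis=0).astype(bool)

def surface_voxels(vol):
    """A voxel lies on the model surface if it is set and has an unset 6-neighbour."""
    padded = np.pad(vol, 1)               # pad with background (False)
    core = padded[1:-1, 1:-1, 1:-1]
    all_neighbours_set = (
        padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1]
        & padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1]
        & padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:]
    )
    return core & ~all_neighbours_set

masks = [np.ones((3, 3), dtype=bool)] * 3  # three identical toy slices
vol = stack_slices(masks)
surf = surface_voxels(vol)                 # only the centre voxel is interior
```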
  • the first patella feature point can be projected onto the first surface of the three-dimensional patella model based on the position information of the first patella feature point, and the second patella feature point can be obtained , wherein the first surface of the three-dimensional patella model is the surface corresponding to the side of the patella close to the tibia, and the second patellar characteristic points include the second upper pole, the second lower pole, the second lateral edge point, and the second medial edge point.
  • the image coordinates of the first patella feature point in the patella feature map can be obtained. Then, based on the image coordinates and the correspondence between the patella surface in the patella feature map and the first surface of the three-dimensional patella model, the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain the second patella feature point.
  • the relative position information of each first patella feature point can be obtained, and then based on the relative position information and the correspondence between the patella surface in the patella feature map and the first surface of the three-dimensional patella model, the first patella The feature points are projected onto the first surface of the three-dimensional patella model to obtain second patella feature points.
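  • One simple way to realize such a projection is a nearest-vertex lookup: match each marked 2D feature point to the first-surface vertex whose in-plane coordinates are closest. This is an illustrative scheme with made-up vertex data, not necessarily the application's exact correspondence method:

```python
import numpy as np

def project_to_surface(point_xy, surface_vertices):
    """Pick the surface vertex whose (x, y) coordinates are closest to the
    2D feature point; its 3D position becomes the projected feature point."""
    pts = np.asarray(surface_vertices, dtype=float)
    d2 = np.sum((pts[:, :2] - np.asarray(point_xy, dtype=float)) ** 2, axis=1)
    return pts[int(np.argmin(d2))]

# hypothetical first-surface vertices (x, y, z) and a first upper pole point
verts = [[0.0, 0.0, 2.0], [10.0, 0.0, 2.5], [0.0, 10.0, 2.5], [10.0, 10.0, 3.0]]
upper_pole_3d = project_to_surface((9.0, 1.0), verts)
```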
  • The second patella feature points on the first surface of the three-dimensional patella model A are the second upper pole point 201, the second lower pole point 203, the second lateral edge point 202, and the second medial edge point 204.
  • the patellar prosthesis can be obtained based on the structural parameters of the three-dimensional patellar model.
  • The second patella feature point is marked on the first surface of the three-dimensional patella model.
  • For example, the corresponding current prosthesis model can be selected from a preset patella prosthesis library based on the current distance between the second lateral edge point and the second medial edge point, and the prosthesis corresponding to the current prosthesis model can then be used as the patella prosthesis corresponding to the three-dimensional patella model, wherein the preset patella prosthesis library includes the correspondence between distances and prosthesis models. In this way, the patella prosthesis to be used can be determined.
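  • The distance-to-model lookup can be sketched with a sorted threshold table; the width thresholds and model names below are hypothetical, not values from any real prosthesis library:

```python
import bisect

SIZE_THRESHOLDS = [29.0, 32.0, 35.0, 38.0]   # mediolateral width cut-offs (mm)
MODELS = ["S", "M", "L", "XL", "XXL"]

def select_prosthesis(ml_distance_mm):
    """Select the prosthesis model whose width range covers the distance
    between the second lateral edge point and the second medial edge point."""
    return MODELS[bisect.bisect_right(SIZE_THRESHOLDS, ml_distance_mm)]

model = select_prosthesis(33.5)
```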
  • S105: Determine the target osteotomy surface of the three-dimensional patella model based on the patella prosthesis and the second patella feature point.
  • The target osteotomy surface of the three-dimensional patella model can be determined based on the patella prosthesis and the second patella feature points, where the target osteotomy surface is the surface obtained after the osteotomy is performed.
  • the target osteotomy surface of the three-dimensional patellar model can be determined based on the thickness of the patellar prosthesis and the second patellar feature point.
  • a simulated osteotomy can be performed based on the target osteotomy surface, so that the shape of the patella and the bone status of the patella can be accurately understood in a three-dimensional manner before surgery, and an accurate preoperative planning plan can be generated.
  • In some embodiments, the above-mentioned second patella feature points may also include a plurality of first target points. After the first patella feature points are projected onto the first surface of the three-dimensional patella model to obtain the second patella feature points, the above method can also include:
  • the first surface is divided into four point candidate areas.
  • After marking the second patella feature points on the first surface of the three-dimensional patella model, the second upper pole point and the second lower pole point can be connected to obtain the connecting line between them, and the second lateral edge point and the second medial edge point can be connected to obtain the connecting line between them, so that the first surface of the three-dimensional patella model can be divided into four point candidate areas, namely the first point candidate area, the second point candidate area, the third point candidate area, and the fourth point candidate area.
  • As shown in Figure 3, the connecting line between the second upper pole point 201 and the second lower pole point 203 and the connecting line between the second lateral edge point 202 and the second medial edge point 204 divide the first surface of the three-dimensional patella model A into four point candidate areas, namely the first point candidate area 310, the second point candidate area 320, the third point candidate area 330, and the fourth point candidate area 340.
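  • Assigning a surface point to one of the four point candidate areas amounts to classifying it against the two connecting lines. A 2D sketch with hypothetical landmark coordinates:

```python
import numpy as np

def quadrant(point, upper, lower, lateral, medial):
    """Classify a first-surface point into one of the four point candidate
    areas delimited by the pole-to-pole and edge-to-edge connecting lines."""
    landmarks = np.array([upper, lower, lateral, medial], dtype=float)
    center = landmarks.mean(axis=0)
    axis_si = landmarks[0] - landmarks[1]     # superior-inferior direction
    axis_ml = landmarks[2] - landmarks[3]     # lateral-medial direction
    v = np.asarray(point, dtype=float) - center
    superior = bool(v @ axis_si >= 0)
    lateral_side = bool(v @ axis_ml >= 0)
    return {(True, True): 1, (True, False): 2,
            (False, False): 3, (False, True): 4}[(superior, lateral_side)]

q = quadrant((3.0, 4.0), upper=(0, 10), lower=(0, -10),
             lateral=(10, 0), medial=(-10, 0))
```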
  • any three point candidate areas are selected from the four point candidate areas, and one point is selected from each of the three point candidate areas as a first target point; the three first target points determine a first plane, wherein the first plane is used to determine the target osteotomy surface of the three-dimensional patella model.
  • For example, three of the first point candidate area 310, the second point candidate area 320, the third point candidate area 330, and the fourth point candidate area 340 can be selected, and one point can be selected from each as a first target point, namely the first target point 305, the first target point 306, and the first target point 307.
  • Figure 4 is a side view of the three-dimensional patella model A after marking the first target point.
  • In the figure, only the second upper pole point 201, the second lower pole point 203, the second lateral edge point 202, and the first target point 305 are drawn.
  • Alternatively, the second point candidate area, the third point candidate area, and the fourth point candidate area may be selected from the four point candidate areas, and one point may be selected from each of them as a first target point. This is all reasonable.
  • the first plane can be determined based on the three first target points, where the first plane is used to determine the target osteotomy surface of the three-dimensional patellar model.
  • In some embodiments, the second upper pole point, the second lower pole point, and the intersection point of the connecting line between the second upper pole point and the second lower pole point with the connecting line between the second lateral edge point and the second medial edge point can be used as the first target points. The first plane determined based on these points is more accurate, so that a more accurate target osteotomy surface can be determined.
  • the above methods can also include:
  • the three-dimensional patella model is adjusted based on the correction line segment so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body.
  • the first patella feature point can be projected onto the first surface of the three-dimensional patella model based on the position information of the first patella feature point to obtain the second patella feature point.
  • The three-dimensional patella model is adjusted based on the correction line segments so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body. Since the correction line segments consist of the connecting line between the second upper pole point and the second lower pole point and the connecting line between the second lateral edge point and the second medial edge point, the correction line segments can represent the orientation information of the first surface.
  • In this way, the projection of the first patella feature point onto the first surface of the three-dimensional patella model, and thus the obtained second patella feature point, can be more accurate.
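  • The alignment step can be sketched as follows: take the two correction line segments as in-plane directions, obtain the first-surface normal as their cross product, and rotate the model so this normal matches the coronal-plane normal. The directions below are hypothetical, and the Rodrigues rotation is a standard construction rather than necessarily the application's implementation:

```python
import numpy as np

def rotation_aligning(n_from, n_to):
    """Rodrigues rotation matrix taking unit direction n_from onto n_to."""
    a = np.asarray(n_from, dtype=float); a = a / np.linalg.norm(a)
    b = np.asarray(n_to, dtype=float); b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(a @ b)
    if np.allclose(v, 0.0):                   # already parallel or antiparallel
        return np.eye(3) if c > 0 else -np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * (1 - c) / float(v @ v)

seg_si = np.array([0.0, 1.0, 0.2])            # upper-to-lower pole segment
seg_ml = np.array([1.0, 0.0, 0.0])            # lateral-to-medial segment
surface_normal = np.cross(seg_si, seg_ml)     # normal of the first surface
coronal_normal = np.array([0.0, 0.0, 1.0])    # assumed anterior-posterior axis
R = rotation_aligning(surface_normal, coronal_normal)
aligned = R @ (surface_normal / np.linalg.norm(surface_normal))
```

Applying R to every vertex of the model would leave the first surface parallel to the coronal plane.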
  • determining the target osteotomy surface of the three-dimensional patellar model based on the patellar prosthesis and the second patellar feature point may include:
  • the parameter information of the patella prosthesis can be obtained based on the patella prosthesis.
  • The preset patella prosthesis library stores the correspondence between patella prostheses and parameter information; based on the current prosthesis model, the parameter information of the patella prosthesis can be obtained.
  • the parameter information may be information corresponding to the thickness of the patellar prosthesis.
  • the osteotomy thickness value of the three-dimensional patellar model can be determined.
  • the information corresponding to the thickness of the patellar prosthesis can be used as the osteotomy thickness value of the three-dimensional patellar model.
  • the three first target points can be projected in a direction away from the first surface of the three-dimensional patella model to obtain three second target points corresponding to the three first target points.
  • S504: Determine the target osteotomy surface of the three-dimensional patella model based on the three second target points.
  • the target osteotomy surface of the three-dimensional patellar model can be determined based on the three second target points, where the target osteotomy surface is parallel to the first plane. This allows the target osteotomy surface of the three-dimensional patellar model to be determined.
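  • The projection and plane-determination steps above can be sketched as an offset of the first plane: shift the three first target points by the osteotomy thickness along the direction away from the first surface, and pass a plane through the shifted points. Coordinates, thickness, and normal below are hypothetical:

```python
import numpy as np

def osteotomy_plane(first_targets, thickness_mm, away_normal):
    """Shift the first target points by the osteotomy thickness along the
    direction away from the first surface; the shifted (second) target points
    define the target osteotomy surface, parallel to the first plane."""
    n = np.asarray(away_normal, dtype=float)
    n = n / np.linalg.norm(n)
    second_targets = np.asarray(first_targets, dtype=float) + thickness_mm * n
    d = float(n @ second_targets[0])          # plane offset: n . x = d
    return second_targets, n, d

firsts = [[10.0, 0.0, 5.0], [0.0, 12.0, 5.0], [-8.0, -6.0, 5.0]]
second_targets, n, d = osteotomy_plane(firsts, thickness_mm=8.0,
                                       away_normal=[0.0, 0.0, 1.0])
```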
  • In some embodiments, performing image segmentation based on the medical image to obtain a patella feature map may include:
  • the medical image is input into the pre-trained segmentation model to obtain a patella feature map.
  • the medical image of the knee joint can be input into the pre-trained segmentation model.
  • The segmentation model can segment the medical image and output the patella feature map. In this way, the patella feature map can be obtained.
  • the segmentation model is trained based on sample medical images.
  • In some embodiments, the above-mentioned segmentation model may include: a deep convolutional neural network 601, an atrous spatial pyramid pooling (ASPP) network (not labeled in the figure), a first convolution layer 602, a second convolution layer 603, a third convolution layer 606, a first pooling layer 604, a second pooling layer 607, and a splicing layer 605.
  • The deep convolutional neural network 601 is connected to the atrous spatial convolution pooling pyramid network and the first convolution layer 602; the atrous spatial convolution pooling pyramid network is connected to the second convolution layer 603; the second convolution layer 603 is connected to the first pooling layer 604; the first pooling layer 604 and the first convolution layer 602 are connected to the splicing layer 605; the splicing layer 605 is connected to the third convolution layer 606; and the third convolution layer 606 is connected to the second pooling layer 607.
  • the dilated spatial convolution pooling pyramid network can be composed of one 1x1 convolution 608, three 3x3 dilated convolutions, namely dilated convolution 609, dilated convolution 610 and dilated convolution 611, and one global pooling 612.
  • The first convolution layer 602 and the second convolution layer 603 may be 1x1 convolutions, and the third convolution layer 606 may be a 3x3 convolution.
  • The deep convolutional neural network 601, the atrous spatial convolution pooling pyramid (ASPP) network, and the first convolution layer 602 form the Encoder process, that is, the feature extraction process; the second convolution layer 603, the third convolution layer 606, the first pooling layer 604, the second pooling layer 607, and the splicing layer 605 form the Decoder process, that is, the feature restoration process.
  • Input the medical image into the deep convolutional neural network to extract low-level image features (Low-Level Features), where the low-level image features can provide detailed information of the image.
  • The low-level image features output by the deep convolutional neural network are input into the atrous spatial convolution pooling pyramid network and the first convolution layer respectively; the first convolution layer outputs the current low-level image features.
  • the high-level image features are input into the second convolutional layer, and the image features output by the second convolutional layer are input into the first pooling layer for upsampling to obtain the current high-level image features.
  • the current high-level image features and the current low-level image features are input into the splicing layer for splicing to obtain the bone feature map.
  • The bone feature map is input into the third convolution layer, and the image features output by the third convolution layer are input into the second pooling layer for upsampling to obtain a patella feature map that is consistent with the size of the medical image.
  • low-level image features are input into the atrous space convolution pooling pyramid network to extract the semantic information of the image, and high-level image features 616 are obtained.
  • As noted above, the atrous spatial convolution pooling pyramid network consists of the 1x1 convolution 608, the three 3x3 dilated convolutions, and the global pooling 612; the sampling rates of the dilated convolutions are 6, 12, and 18 respectively. The low-level image features can therefore be sampled in parallel using atrous convolutions with different sampling rates, which can better capture the contextual information of the image.
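  • The effect of the sampling rate can be seen in a 1D toy version of dilated convolution: spacing the kernel taps `rate` samples apart enlarges the receptive field without adding weights. This NumPy sketch is illustrative only, not the network's actual 2D implementation:

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """Valid 1D convolution whose kernel taps are spaced `rate` samples apart."""
    k = len(kernel)
    span = (k - 1) * rate + 1       # receptive field of one output sample
    out = [sum(kernel[j] * x[i + j * rate] for j in range(k))
           for i in range(len(x) - span + 1)]
    return np.array(out, dtype=float), span

x = np.arange(40, dtype=float)
kernel = [1.0, 1.0, 1.0]            # a 3-tap kernel, like one row of a 3x3
out6, rf6 = dilated_conv1d(x, kernel, rate=6)
_, rf18 = dilated_conv1d(x, kernel, rate=18)   # wider context, same 3 weights
```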
  • Since the first convolution layer 602 is a 1x1 convolution, the number of channels of the low-level image features can be reduced for subsequent feature splicing.
  • The high-level image features 616 are input to the second convolution layer 603; since the second convolution layer 603 is a 1x1 convolution, the number of channels of the high-level image features can be reduced for subsequent feature splicing.
  • the image features 614 output by the second convolution layer 603, that is, the high-level image features after reducing the number of channels, are input to the first pooling layer 604 for upsampling to obtain the current high-level image features.
  • the current high-level image features and the current low-level image features 613 are input into the splicing layer 605 for splicing to obtain the bone feature map 615. Splicing the current high-level image features and the current low-level image features 613 can improve the accuracy of the segmentation boundary.
  • the patella feature map is input into the pre-trained point recognition model to obtain an image with the first patella feature point marked.
  • After obtaining the patella feature map, in order to identify and mark the first patella feature point on it, the patella feature map can be input into the pre-trained point recognition model; the point recognition model recognizes the feature points based on the patella feature map and thereby outputs an image with the first patella feature point marked.
  • the point recognition model is a model trained based on the sample patella feature map.
  • the above point recognition model may include: a fourth convolution layer, a fifth convolution layer, a replication layer and a pooling layer.
  • the fourth convolution layer includes four convolutions, namely a first convolution 701 , a second convolution 702 , a third convolution 703 and a fourth convolution 704 .
  • the replication layer includes four replication structures, namely a first replication structure 715 , a second replication structure 714 , a third replication structure 713 and a fourth replication structure 712 .
  • the fifth convolution layer includes three convolutions, namely the fifth convolution 705, the sixth convolution 706 and the seventh convolution 707.
  • The pooling layer includes four pooling structures, namely the first pooling structure 711, the second pooling structure 710, the third pooling structure 709, and the fourth pooling structure 708.
  • The first convolution 701, the second convolution 702, the third convolution 703, the fourth convolution 704, the fifth convolution 705, the sixth convolution 706, and the seventh convolution 707 are connected in sequence; the first pooling structure 711, the second pooling structure 710, the third pooling structure 709, and the fourth pooling structure 708 are connected in sequence; the seventh convolution 707 is connected to the fourth pooling structure 708; the first convolution 701 is connected to the first replication structure 715; the second convolution 702 is connected to the second replication structure 714; the third convolution 703 is connected to the third replication structure 713; and the fourth convolution 704 is connected to the fourth replication structure 712.
  • There is a corresponding relationship between the first convolution 701, the first replication structure 715, and the first pooling structure 711; between the second convolution 702, the second replication structure 714, and the second pooling structure 710; between the third convolution 703, the third replication structure 713, and the third pooling structure 709; and between the fourth convolution 704, the fourth replication structure 712, and the fourth pooling structure 708.
  • Inputting the patella feature map into the pre-trained point recognition model to obtain an image with the first patella feature point marked may include:
  • the patella feature map is input into the fourth convolution layer for feature extraction to obtain the features to be copied. Input the features to be copied into the corresponding copy layer to copy the features to obtain the copied features.
  • the features to be copied are input into the fifth convolutional layer for feature extraction to obtain the features to be pooled. Add the features to be pooled and the copied features, and input them into the corresponding pooling layer to obtain the pooling features.
  • a heatmap is obtained. The point with the maximum probability value is selected from the heatmap as the first patella feature. point, and mark the first patella feature point, where the heat map includes pixels whose pixel values can represent the probability of the first patella feature point, and the maximum probability value point is the point with the largest pixel value.
  • the patella feature map is input to the first convolution 701, the second convolution 702, the third convolution 703 and the fourth convolution 704 of the fourth convolution layer to perform feature extraction in sequence.
  • after the patella feature map is input into the first convolution 701 for feature extraction, the image features output by the first convolution 701 can be input into the second convolution 702 and the first replication structure 715.
  • the second convolution 702 performs feature extraction on the image features output by the first convolution 701, and the image features output by the second convolution 702 can be input into the third convolution 703 and the second replication structure 714.
  • the third convolution 703 performs feature extraction on the image features output by the second convolution 702, and the image features output by the third convolution 703 can be input into the fourth convolution 704 and the third replication structure 713.
  • the fourth convolution 704 performs feature extraction on the image features output by the third convolution 703, and the image features output by the fourth convolution 704 can be input into the fifth convolution layer and the fourth replication structure 712.
  • the fifth convolution 705, the sixth convolution 706 and the seventh convolution 707 of the fifth convolution layer sequentially extract the image features output by the fourth convolution 704; the result is added to the image features output by the fourth replication structure 712, and the added image features are input into the fourth pooling structure 708 for upsampling.
  • the image features output by the fourth pooling structure 708 are added to the image features output by the third replication structure 713, and the added image features are input into the third pooling structure 709 for upsampling.
  • the image features output by the third pooling structure 709 are added to the image features output by the second replication structure 714, and the added image features are input to the second pooling structure 710 for upsampling.
  • the image features output by the second pooling structure 710 are added to the image features output by the first replication structure 715, and the added image features are input into the first pooling structure 711 for upsampling. In this way, the pooled features output by the first pooling structure 711 superimpose all the image features, retaining the image information at each scale. Then, through a 1x1 convolution, a heat map whose pixel values represent the probability of the first patella feature point can be generated from the pooled features; the point with the largest pixel value can then be selected from the heat map, taken as the first patella feature point (the maximum probability value point), and marked.
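The final heat-map step described above reduces to taking the argmax over per-pixel probabilities. A minimal sketch of that selection, assuming the heat map is a 2-D array of probabilities (the example values and shape are hypothetical, not taken from the application):

```python
import numpy as np

def pick_feature_point(heatmap: np.ndarray) -> tuple:
    """Return (row, col) of the pixel with the highest probability value.

    `heatmap` is assumed to be a 2-D array whose pixel values encode the
    probability that the pixel is the first patella feature point.
    """
    idx = int(np.argmax(heatmap))
    return tuple(int(i) for i in np.unravel_index(idx, heatmap.shape))

# hypothetical 3x3 heat map; the peak probability 0.9 sits at row 1, col 2
hm = np.array([[0.1, 0.2, 0.1],
               [0.0, 0.3, 0.9],
               [0.2, 0.1, 0.4]])
print(pick_feature_point(hm))  # → (1, 2)
```

In a real pipeline the heat map would be the output of the 1x1 convolution over the pooled features; only the selection of the maximum probability point is shown here.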
  • it can be seen that this application can input the medical image into the pre-trained segmentation model to obtain the patella feature map, and input the patella feature map into the pre-trained point recognition model to obtain an image including the patella feature point.
  • based on the medical image, the pre-trained segmentation model and the pre-trained point recognition model, an image containing the first patella feature point can thus be obtained more conveniently and quickly for subsequent acquisition of patella information.
  • as an implementation of the present application, the internal or external rotation of the three-dimensional patella prosthesis can be adjusted, as can its anteversion or posterior inclination. The relative position of the current three-dimensional patella prosthesis with respect to the femur and the tibia can also be adjusted; for example, the prosthesis can be moved up or down, or medially or laterally. In this way, the placement of the three-dimensional patella prosthesis can be understood.
  • as one implementation, the three-dimensional patella prosthesis can be fine-tuned in 0.1 mm steps so that it lies within a preset range; in this way, the position of the patella can be understood.
  • the patella image processing system provided by the present application is described below.
  • the patella image processing system described below and the patella image processing method described above can be mutually referenced.
  • this application discloses a patella image processing system, which includes:
  • the first acquisition module 810 is configured to acquire a medical image of the knee joint, and perform image segmentation based on the medical image to obtain a patella feature map.
  • Marking module 820 is configured to identify and mark a first patella feature point on the patella feature map.
  • the first patellar characteristic point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point.
  • the projection module 830 is configured to perform three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and to project, based on the position information of the first patella feature point, the first patella feature point onto the first surface of the three-dimensional patella model to obtain the second patella feature point.
  • the second patellar characteristic point includes a second upper pole point, a second lower pole point, a second lateral edge point and a second medial edge point.
  • the second acquisition module 840 is configured to acquire the patella prosthesis based on the structural parameters of the three-dimensional patella model.
  • the first determination module 850 is configured to determine the target osteotomy surface of the three-dimensional patellar model based on the patellar prosthesis and the second patellar feature point.
  • the second patellar characteristic point may also include multiple first target points.
  • the above system may also include:
  • a dividing module configured to, after the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain the second patella feature point, divide the first surface into four point candidate regions based on the connecting line between the second upper pole point and the second lower pole point and the connecting line between the second lateral edge point and the second medial edge point.
  • the second determination module is configured to select any three of the four point candidate regions, select one point from each of the three selected point candidate regions as a first target point, and determine a first plane based on the three first target points.
  • the first plane is used to determine the target osteotomy surface of the three-dimensional patellar model.
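The first plane determined from the three first target points can be computed with a cross product. A minimal sketch, assuming the points are given as 3-D coordinates and are non-collinear (the example coordinates are hypothetical, not from the application):

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (unit_normal, d) for the plane n·x = d through three
    non-collinear 3-D points, e.g. one first target point drawn from
    each of three point candidate regions."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal of the spanned plane
    n /= np.linalg.norm(n)           # normalize to unit length
    return n, float(n @ p1)          # d = signed distance from origin

n, d = plane_from_points([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(n, d)  # the plane z = 0: normal [0. 0. 1.], d = 0.0
```

The returned plane can then serve as the reference from which the parallel target osteotomy surface is offset.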
  • the above system may also include:
  • an adjustment module configured to, before the first patella feature point is projected, based on its position information, onto the first surface of the three-dimensional patella model to obtain the second patella feature point, adjust the three-dimensional patella model based on a correction line segment so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body.
  • the correction line segment is composed of the connecting line between the second upper pole point and the second lower pole point, and the connecting line between the second lateral edge point and the second medial edge point.
  • the above-mentioned first determination module 850 may include:
  • the acquisition unit is configured to acquire parameter information of the patella prosthesis based on the patella prosthesis.
  • the first determining unit is configured to determine the osteotomy thickness value of the three-dimensional patellar model based on the parameter information of the patellar prosthesis.
  • a projection unit configured to project the three first target points respectively in a direction away from the first surface of the three-dimensional patella model to obtain three second target points respectively corresponding to the three first target points, where the distance between each first target point and the corresponding second target point is the osteotomy thickness value;
  • the second determination unit is configured to determine the target osteotomy surface of the three-dimensional patella model based on the three second target points.
  • the target osteotomy surface is parallel to the first plane.
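The offset performed by the projection unit amounts to translating the three first target points along the first surface's normal by the osteotomy thickness, which yields a plane parallel to the first plane. The function name, the unit normal and the example values below are illustrative assumptions, not the application's implementation:

```python
import numpy as np

def offset_osteotomy_points(first_targets, surface_normal, thickness):
    """Translate each first target point by `thickness` in the direction
    away from the first surface (here taken as -normal), giving the
    second target points; the plane through them is parallel to the
    first plane at distance `thickness`."""
    n = np.asarray(surface_normal, dtype=float)
    n /= np.linalg.norm(n)                    # ensure a unit direction
    pts = np.asarray(first_targets, dtype=float)
    return pts - thickness * n                # second target points

# hypothetical setup: first surface in the z = 0 plane, 8 mm osteotomy
firsts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
seconds = offset_osteotomy_points(firsts, [0, 0, 1], 8.0)
print(seconds[:, 2])  # → [-8. -8. -8.]  (each point offset by 8 mm)
```

Because all three points move by the same vector, the plane they determine is automatically parallel to the first plane, matching the constraint stated above.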
  • the above-mentioned first acquisition module 810 may include:
  • the first input unit is configured to input the medical image into a pre-trained segmentation model to obtain a patella feature map.
  • the segmentation model is a model trained based on sample medical images.
  • the above marking module 820 may include:
  • the second input unit is configured to input the patella feature map into a pre-trained point recognition model to obtain an image with the first patella feature point marked.
  • the point recognition model is a model trained based on the sample patella feature map.
  • the segmentation model may include: a deep convolutional neural network, an atrous spatial pyramid pooling network, a first convolution layer, a second convolution layer, a third convolution layer, a first pooling layer, a second pooling layer and a splicing layer;
  • the above-mentioned first input unit may include:
  • a first input subunit configured to input the medical image into the deep convolutional neural network to extract low-level image features;
  • a second input subunit configured to input the low-level image features into the atrous spatial pyramid pooling network to extract the semantic information of the image and obtain high-level image features;
  • a third input subunit configured to input the low-level image features into the first convolution layer to obtain current low-level image features;
  • a fourth input subunit configured to input the high-level image features into the second convolution layer, and to input the image features output by the second convolution layer into the first pooling layer for upsampling, to obtain current high-level image features;
  • a fifth input subunit configured to input the current high-level image features and the current low-level image features into the splicing layer for splicing to obtain a bone feature map;
  • a sixth input subunit configured to input the bone feature map into the third convolution layer, and to input the image features output by the third convolution layer into the second pooling layer for upsampling, to obtain a patella feature map consistent with the size of the medical image.
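The decoder flow these subunits describe (upsample the high-level features, splice them with the low-level features, then upsample again to the input size) can be sketched at the tensor-shape level. The channel counts and scale factors below are illustrative assumptions; convolutions are omitted, and nearest-neighbour upsampling stands in for the pooling layers:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling over the spatial axes of (C, H, W)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def decoder_sketch(low_level, high_level):
    """Shape-level sketch of the decoder: high-level (ASPP) features are
    upsampled to the low-level resolution (first pooling layer), spliced
    with the low-level features (splicing layer), and upsampled again to
    the input size (second pooling layer). Only tensor plumbing is shown."""
    cur_high = upsample2x(upsample2x(high_level))     # 1/16 -> 1/4 scale
    fused = np.concatenate([low_level, cur_high], 0)  # splice on channels
    return upsample2x(upsample2x(fused))              # 1/4 -> full scale

low = np.zeros((48, 64, 64))     # hypothetical low-level features, 1/4 input
high = np.zeros((256, 16, 16))   # hypothetical ASPP output, 1/16 input
out = decoder_sketch(low, high)
print(out.shape)  # → (304, 256, 256), matching a 256x256 input image
```

In a trained model the 1x1 and 3x3 convolutions would adjust channel counts between these steps; the sketch only shows why the output spatial size matches the medical image.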
  • the above-mentioned point recognition model may include: a fourth convolution layer, a fifth convolution layer, a replication layer and a pooling layer;
  • the above-mentioned second input unit may include:
  • a seventh input subunit configured to input the patella feature map into the fourth convolution layer for feature extraction to obtain features to be copied;
  • an eighth input subunit configured to input the features to be copied into the corresponding replication layer for feature replication to obtain replicated features;
  • a ninth input subunit configured to input the features to be copied into the fifth convolution layer for feature extraction to obtain features to be pooled;
  • a tenth input subunit configured to add the features to be pooled and the replicated features, input the sum into the corresponding pooling layer to obtain pooled features, and obtain a heat map based on the pooled features, where the heat map includes pixels whose pixel values represent the probability of patella feature points;
  • a selection subunit configured to select the maximum probability value point from the heat map as the first patella feature point and mark the first patella feature point.
  • the maximum probability value point is the point with the largest pixel value.
  • Figure 9 illustrates a schematic diagram of the physical structure of an electronic device.
  • the electronic device may include: a processor (processor) 910, a communication interface (Communications Interface) 920, a memory (memory) 930 and a communication bus 940.
  • the processor 910, the communication interface 920, and the memory 930 complete communication with each other through the communication bus 940.
  • the processor 910 can call the logic instructions in the memory 930 to execute the patellar image processing method provided by each of the above methods.
  • the above-mentioned logic instructions in the memory 930 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, and other media that can store program code.
  • the present application also provides a computer program product.
  • the computer program product includes a computer program.
  • the computer program can be stored on a non-transitory computer-readable storage medium.
  • when executed by a processor, the computer program can implement the deep-learning-based three-dimensional preoperative planning method for knee patella replacement provided by each of the above methods.
  • the present application also provides a non-transitory computer-readable storage medium on which a computer program is stored.
  • when executed by a processor, the computer program implements the deep-learning-based three-dimensional preoperative planning method for knee patella replacement provided by each of the above methods.
  • the system embodiments described above are only illustrative.
  • the units described as separate components may or may not be physically separated.
  • the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which a person of ordinary skill in the art can understand and implement without creative effort.
  • each embodiment can be implemented by software plus a necessary general hardware platform, and of course, it can also be implemented by hardware.
  • the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., including a number of instructions to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods described in various embodiments or certain parts of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Graphics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Prostheses (AREA)

Abstract

The present application provides a deep-learning-based three-dimensional preoperative planning method and system for knee patella replacement. A medical image of the knee joint is acquired, and image segmentation is performed based on the medical image to obtain a patella feature map; a first patella feature point is identified and marked on the patella feature map; three-dimensional reconstruction is performed based on the patella feature map to obtain a three-dimensional patella model, and based on the position information of the first patella feature point, the first patella feature point is projected onto a first surface of the three-dimensional patella model to obtain a second patella feature point, where the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point and a second medial edge point; a patella prosthesis is acquired based on the structural parameters of the three-dimensional patella model; and a target osteotomy surface of the three-dimensional patella model is determined based on the patella prosthesis and the second patella feature point. The target osteotomy surface of the three-dimensional patella model can thus be obtained, so that an accurate preoperative planning scheme can be provided for the knee patella.

Description

Deep-learning-based three-dimensional preoperative planning method and system for knee patella replacement
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202210836442.4, filed on July 15, 2022 and entitled "Deep-learning-based three-dimensional preoperative planning system and method for knee patella replacement", which is incorporated herein by reference in its entirety.
Technical field
The present application relates to the field of computer technology, and in particular to a deep-learning-based three-dimensional preoperative planning method and system for knee patella replacement.
Background
In total knee arthroplasty, the quality of the patellar osteotomy has a significant impact on the patient. However, since the shape of the patella differs from person to person, it is difficult to obtain patella-related information for each individual. At present, physical measurement of the patella is performed on two-dimensional X-ray films or cadaveric anatomical specimens; the measurement accuracy is affected by many factors, so accurate patella-related information cannot be obtained, and an accurate preoperative planning scheme for the knee patella cannot be provided. Therefore, a method that can obtain accurate patella information, and thereby provide an accurate preoperative planning scheme for the knee patella, is urgently needed.
Summary
The present application provides a deep-learning-based three-dimensional preoperative planning method and system for knee patella replacement, to overcome the defects in the prior art that accurate patella information cannot be obtained and an accurate preoperative planning scheme cannot be provided for the knee patella, so as to obtain accurate patella information and thereby provide an accurate preoperative planning scheme for the knee patella.
The present application provides a deep-learning-based three-dimensional preoperative planning method for knee patella replacement, the method including:
acquiring a medical image of the knee joint, and performing image segmentation based on the medical image to obtain a patella feature map;
identifying and marking a first patella feature point on the patella feature map, where the first patella feature point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point;
performing three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and projecting, based on the position information of the first patella feature point, the first patella feature point onto a first surface of the three-dimensional patella model to obtain a second patella feature point, where the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point and a second medial edge point;
acquiring a patella prosthesis based on structural parameters of the three-dimensional patella model;
determining a target osteotomy surface of the three-dimensional patella model based on the patella prosthesis and the second patella feature point.
Optionally, the second patella feature point further includes a plurality of first target points;
after the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain the second patella feature point, the method further includes:
dividing the first surface into four point candidate regions based on the connecting line between the second upper pole point and the second lower pole point and the connecting line between the second lateral edge point and the second medial edge point;
selecting any three of the four point candidate regions, selecting one point from each of the three selected point candidate regions as a first target point, and determining a first plane based on the three first target points, where the first plane is used to determine the target osteotomy surface of the three-dimensional patella model.
Optionally, before the first patella feature point is projected, based on the position information of the first patella feature point, onto the first surface of the three-dimensional patella model to obtain the second patella feature point, the method further includes:
adjusting the three-dimensional patella model based on a correction line segment so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body, where the correction line segment is composed of the connecting line between the second upper pole point and the second lower pole point and the connecting line between the second lateral edge point and the second medial edge point.
Optionally, determining the target osteotomy surface of the three-dimensional patella model based on the patella prosthesis and the second patella feature point includes:
acquiring parameter information of the patella prosthesis based on the patella prosthesis;
determining an osteotomy thickness value of the three-dimensional patella model based on the parameter information of the patella prosthesis;
projecting the three first target points respectively in a direction away from the first surface of the three-dimensional patella model to obtain three second target points respectively corresponding to the three first target points, where the distance between each first target point and the corresponding second target point is the osteotomy thickness value;
determining the target osteotomy surface of the three-dimensional patella model based on the three second target points, where the target osteotomy surface is parallel to the first plane.
Optionally, performing image segmentation based on the medical image to obtain the patella feature map includes:
inputting the medical image into a pre-trained segmentation model to obtain the patella feature map, where the segmentation model is a model trained on sample medical images;
identifying and marking the first patella feature point on the patella feature map includes:
inputting the patella feature map into a pre-trained point recognition model to obtain an image with the first patella feature point marked, where the point recognition model is a model trained on sample patella feature maps.
Optionally, the segmentation model includes: a deep convolutional neural network, an atrous spatial pyramid pooling network, a first convolution layer, a second convolution layer, a third convolution layer, a first pooling layer, a second pooling layer and a splicing layer;
inputting the medical image into the pre-trained segmentation model to obtain the patella feature map includes:
inputting the medical image into the deep convolutional neural network to extract low-level image features;
inputting the low-level image features into the atrous spatial pyramid pooling network to extract semantic information of the image and obtain high-level image features;
inputting the low-level image features into the first convolution layer to obtain current low-level image features;
inputting the high-level image features into the second convolution layer, and inputting the image features output by the second convolution layer into the first pooling layer for upsampling to obtain current high-level image features;
inputting the current high-level image features and the current low-level image features into the splicing layer for splicing to obtain a bone feature map;
inputting the bone feature map into the third convolution layer, and inputting the image features output by the third convolution layer into the second pooling layer for upsampling to obtain a patella feature map consistent with the size of the medical image.
Optionally, the point recognition model includes: a fourth convolution layer, a fifth convolution layer, a replication layer and a pooling layer;
inputting the patella feature map into the pre-trained point recognition model to obtain the image with the patella feature point marked includes:
inputting the patella feature map into the fourth convolution layer for feature extraction to obtain features to be copied;
inputting the features to be copied into the corresponding replication layer for feature replication to obtain replicated features;
inputting the features to be copied into the fifth convolution layer for feature extraction to obtain features to be pooled;
adding the features to be pooled and the replicated features, and inputting the sum into the corresponding pooling layer to obtain pooled features; obtaining a heat map based on the pooled features, where the heat map includes pixels whose values represent the probability of patella feature points;
selecting the maximum probability value point from the heat map as the first patella feature point, and marking the first patella feature point, where the maximum probability value point is the point with the largest pixel value.
The present application further provides a deep-learning-based three-dimensional preoperative planning system for knee patella replacement, the system including:
a first acquisition module configured to acquire a medical image of the knee joint and perform image segmentation based on the medical image to obtain a patella feature map;
a marking module configured to identify and mark a first patella feature point on the patella feature map, where the first patella feature point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point;
a projection module configured to perform three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and to project, based on the position information of the first patella feature point, the first patella feature point onto a first surface of the three-dimensional patella model to obtain a second patella feature point, where the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point and a second medial edge point;
a second acquisition module configured to acquire a patella prosthesis based on structural parameters of the three-dimensional patella model;
a first determination module configured to determine a target osteotomy surface of the three-dimensional patella model based on the patella prosthesis and the second patella feature point.
The present application further provides an electronic device, including a memory, a processor and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the steps of any of the patella image processing methods described above.
The present application further provides a non-transitory computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any of the patella image processing methods described above.
The present application further provides a computer program product, including a computer program, where the computer program, when executed by a processor, implements the steps of any of the patella image processing methods described above.
In the deep-learning-based three-dimensional preoperative planning method and system for knee patella replacement provided by the present application, a medical image of the knee joint is acquired, and image segmentation is performed based on the medical image to obtain a patella feature map; a first patella feature point is identified and marked on the patella feature map, where the first patella feature point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point; three-dimensional reconstruction is performed based on the patella feature map to obtain a three-dimensional patella model, and based on the position information of the first patella feature point, the first patella feature point is projected onto a first surface of the three-dimensional patella model to obtain a second patella feature point, where the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point and a second medial edge point; a patella prosthesis is acquired based on the structural parameters of the three-dimensional patella model; and a target osteotomy surface of the three-dimensional patella model is determined based on the patella prosthesis and the second patella feature point. In this way, the target osteotomy surface of the three-dimensional patella model can be obtained, so that an accurate preoperative planning scheme can be provided for the knee patella.
Brief description of the drawings
To describe the technical solutions in the present application or the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is the first flow diagram of the deep-learning-based three-dimensional preoperative planning method for knee patella replacement provided by the present application;
Figure 2 is a schematic diagram of marking the second patella feature points provided by the present application;
Figure 3 is a schematic diagram of marking the first target points on the first surface provided by the present application;
Figure 4 is a side view of marking the first target points on the first surface provided by the present application;
Figure 5 is the second flow diagram of the deep-learning-based three-dimensional preoperative planning method for knee patella replacement provided by the present application;
Figure 6 is a schematic structural diagram of the segmentation model provided by the present application;
Figure 7 is a schematic structural diagram of the point recognition model provided by the present application;
Figure 8 is a schematic structural diagram of the deep-learning-based three-dimensional preoperative planning system for knee patella replacement provided by the present application;
Figure 9 is a schematic structural diagram of the electronic device provided by the present application.
Detailed description
To make the objectives, technical solutions and advantages of the present application clearer, the technical solutions in the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
In order to obtain accurate patella information and thereby provide an accurate preoperative planning scheme for the knee patella, the present application provides a deep-learning-based three-dimensional preoperative planning method, system, electronic device, non-transitory computer-readable storage medium and computer program product for knee patella replacement. The deep-learning-based three-dimensional preoperative planning method for knee patella replacement of the present application is described below with reference to Figure 1.
As shown in Figure 1, the present application discloses a deep-learning-based three-dimensional preoperative planning method for knee patella replacement, the method including:
S101: acquiring a medical image of the knee joint, and performing image segmentation based on the medical image to obtain a patella feature map.
The knee joint medical image includes the femur, the tibia and the patella. In order to obtain patella-related information, after the medical image of the knee joint is acquired, image segmentation can be performed based on it to obtain a patella feature map, where the patella feature map is a feature map of the surface of the patella on the side close to the tibia.
S102: identifying and marking a first patella feature point on the patella feature map.
After the patella feature map is obtained, in order to determine the target osteotomy surface, a first patella feature point can be identified and marked on the patella feature map, yielding an image with the first patella feature point marked. The first patella feature point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point.
S103: performing three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and projecting, based on the position information of the first patella feature point, the first patella feature point onto a first surface of the three-dimensional patella model to obtain a second patella feature point.
After the patella feature map is obtained, three-dimensional reconstruction can be performed based on it to obtain a three-dimensional patella model. In one implementation, multiple patella feature maps can be acquired and stacked to form the three-dimensional patella model.
As one implementation, after the medical image of the knee joint is acquired, VTK (Visualization Toolkit) can be used to perform three-dimensional reconstruction on the medical image of the knee joint to obtain the three-dimensional patella model. In this way, the three-dimensional patella model can be obtained, and the shape of the patella can be understood.
After the three-dimensional patella model and the image with the first patella feature point marked are obtained, the first patella feature point can be projected onto the first surface of the three-dimensional patella model based on its position information to obtain the second patella feature point, where the first surface of the three-dimensional patella model is the surface corresponding to the side of the patella close to the tibia, and the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point and a second medial edge point.
In one implementation, the image coordinates of the first patella feature point in the patella feature map can be obtained; then, based on the image coordinates and the correspondence between the patella surface in the patella feature map and the first surface of the three-dimensional patella model, the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain the second patella feature point.
In another implementation, the relative position information of each first patella feature point can be obtained; then, based on the relative position information and the correspondence between the patella surface in the patella feature map and the first surface of the three-dimensional patella model, the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain the second patella feature point.
For example, as shown in Figure 2, the second patella feature points on the first surface of the three-dimensional patella model A are the second upper pole point 201, the second lower pole point 203, the second lateral edge point 202 and the second medial edge point 204.
S104: acquiring a patella prosthesis based on the structural parameters of the three-dimensional patella model.
After the three-dimensional patella model is obtained, a patella prosthesis can be acquired based on its structural parameters. In one implementation, after the second patella feature points are marked on the first surface of the three-dimensional patella model, a corresponding current prosthesis size can be selected from a preset patella prosthesis library based on the current distance between the second lateral edge point and the second medial edge point, and the prosthesis of that size is taken as the patella prosthesis corresponding to the three-dimensional patella model, where the preset patella prosthesis library includes the correspondence between distances and prosthesis sizes. In this way, the patella prosthesis to be used can be determined.
S105: determining a target osteotomy surface of the three-dimensional patella model based on the patella prosthesis and the second patella feature point.
After the patella prosthesis and the three-dimensional patella model with the second patella feature points marked are obtained, the target osteotomy surface of the three-dimensional patella model can be determined based on the patella prosthesis and the second patella feature point, where the target osteotomy surface is the surface obtained after the osteotomy operation.
In one implementation, since the thickness of the patella prosthesis determined according to the current prosthesis size is fixed, the target osteotomy surface of the three-dimensional patella model can be determined based on the thickness of the patella prosthesis and the second patella feature point. A simulated osteotomy can then be performed based on the target osteotomy surface, so that before surgery the shape of the patella can be understood in three dimensions, the bone condition of the patella can be known accurately, and an accurate preoperative planning scheme can be generated.
As an implementation of the present application, the second patella feature point may further include a plurality of first target points. After the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain the second patella feature point, the method may further include:
dividing the first surface into four point candidate regions based on the connecting line between the second upper pole point and the second lower pole point, and the connecting line between the second lateral edge point and the second medial edge point.
After the second patella feature points are marked on the first surface of the three-dimensional patella model, the second upper pole point can be connected to the second lower pole point to obtain their connecting line, and the second lateral edge point can be connected to the second medial edge point to obtain their connecting line, so that the first surface of the three-dimensional patella model can be divided into four point candidate regions, namely a first point candidate region, a second point candidate region, a third point candidate region and a fourth point candidate region.
For example, as shown in Figure 3, the connecting line between the second upper pole point 201 and the second lower pole point 203 and the connecting line between the second lateral edge point 202 and the second medial edge point 204 divide the first surface of the three-dimensional patella model A into four point candidate regions, namely the first point candidate region 310, the second point candidate region 320, the third point candidate region 330 and the fourth point candidate region 340.
Selecting any three of the four point candidate regions, selecting one point from each of the three selected point candidate regions as a first target point, and determining a first plane based on the three first target points, where the first plane is used to determine the target osteotomy surface of the three-dimensional patella model.
After the four point candidate regions are divided, any three of them can be selected, and one point can be selected from each of the three selected regions as a first target point.
For example, as shown in Figure 3, from the first point candidate region 310, the second point candidate region 320, the third point candidate region 330 and the fourth point candidate region 340, the first point candidate region 310, the second point candidate region 320 and the fourth point candidate region 340 can be selected; one point is then selected in each of them, and the selected points are taken as the first target points, namely the first target point 305, the first target point 306 and the first target point 307.
Figure 4 is a side view of the three-dimensional patella model A after the first target points are marked; only the second upper pole point 201, the second lower pole point 203, the second lateral edge point 202 and the first target point 305 are drawn in the figure.
For another example, from the first, second, third and fourth point candidate regions, the second, third and fourth point candidate regions can be selected, and one point can be selected from each of them as a first target point. This is equally reasonable.
Then, after the first target points are determined, a first plane can be determined based on the three first target points, where the first plane is used to determine the target osteotomy surface of the three-dimensional patella model.
In one implementation, to obtain a more accurate first plane and thus a more accurate target osteotomy surface, the second upper pole point, the second lower pole point, and the intersection of the connecting line between the second upper pole point and the second lower pole point with the connecting line between the second lateral edge point and the second medial edge point can be selected as the first target points; the first plane determined from these points is more accurate, so that a more accurate target osteotomy surface can be determined.
As an implementation of the present application, before the first patella feature point is projected, based on its position information, onto the first surface of the three-dimensional patella model to obtain the second patella feature point, the method may further include:
adjusting the three-dimensional patella model based on a correction line segment so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body.
To mark the second patella feature points more accurately, the three-dimensional patella model can be adjusted based on the correction line segment before the projection, so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body. Since the correction line segment is composed of the connecting line between the second upper pole point and the second lower pole point and the connecting line between the second lateral edge point and the second medial edge point, it can represent the orientation of the first surface.
When the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body, projecting the first patella feature point onto the first surface based on its position information yields a more accurate second patella feature point.
As an implementation of the present application, as shown in Figure 5, determining the target osteotomy surface of the three-dimensional patella model based on the patella prosthesis and the second patella feature point may include:
S501: acquiring parameter information of the patella prosthesis based on the patella prosthesis.
After the patella prosthesis is acquired, its parameter information can be obtained based on it. In one implementation, the preset patella prosthesis library stores the correspondence between patella prostheses and parameter information; the patella prosthesis can be obtained based on the current prosthesis size, and its parameter information can thus be obtained. The parameter information may be the information corresponding to the thickness of the patella prosthesis.
S502: determining the osteotomy thickness value of the three-dimensional patella model based on the parameter information of the patella prosthesis.
After the parameter information of the patella prosthesis is determined, the osteotomy thickness value of the three-dimensional patella model can be determined. In one implementation, the information corresponding to the thickness of the patella prosthesis can be taken as the osteotomy thickness value of the three-dimensional patella model.
S503: projecting the three first target points respectively in a direction away from the first surface of the three-dimensional patella model to obtain three second target points respectively corresponding to the three first target points, where the distance between each first target point and the corresponding second target point is the osteotomy thickness value.
After the osteotomy thickness value of the three-dimensional patella model is determined, the three first target points can be projected respectively in a direction away from the first surface of the three-dimensional patella model to obtain three corresponding second target points, where the second target points lie on the three-dimensional patella model and the distance between each first target point and the corresponding second target point is the osteotomy thickness value.
S504: determining the target osteotomy surface of the three-dimensional patella model based on the three second target points.
After the three second target points are determined, the target osteotomy surface of the three-dimensional patella model can be determined based on them, where the target osteotomy surface is parallel to the first plane. In this way, the target osteotomy surface of the three-dimensional patella model can be determined.
As an implementation of the present application, performing image segmentation based on the medical image to obtain the patella feature map may include:
inputting the medical image into a pre-trained segmentation model to obtain the patella feature map.
After the medical image of the knee joint is acquired, it can be input into the pre-trained segmentation model, which performs image segmentation on the medical image and outputs the patella feature map. The segmentation model is trained on sample medical images.
As shown in Figure 6, the segmentation model may include: a deep convolutional neural network 601, an atrous spatial pyramid pooling (ASPP) network (not labeled in the figure), a first convolution layer 602, a second convolution layer 603, a third convolution layer 606, a first pooling layer 604, a second pooling layer 607 and a splicing layer 605.
The deep convolutional neural network 601 is connected to the ASPP network and the first convolution layer 602; the ASPP network is connected to the second convolution layer 603; the second convolution layer 603 is connected to the first pooling layer 604; the first pooling layer 604 and the first convolution layer 602 are connected to the splicing layer 605; the splicing layer 605 is connected to the third convolution layer 606; and the third convolution layer 606 is connected to the second pooling layer 607.
The ASPP network may consist of one 1x1 convolution 608, three 3x3 atrous convolutions (atrous convolution 609, atrous convolution 610 and atrous convolution 611) and one global pooling 612. The first convolution layer 602 and the second convolution layer 603 may be 1x1 convolutions, and the third convolution layer 606 may be a 3x3 convolution.
The deep convolutional neural network 601, the ASPP network and the first convolution layer 602 form the encoder process; the second convolution layer 603, the third convolution layer 606, the first pooling layer 604, the second pooling layer 607 and the splicing layer 605 form the decoder process, that is, the feature restoration process.
Inputting the medical image into the pre-trained segmentation model to obtain the patella feature map may include:
inputting the medical image into the deep convolutional neural network to extract low-level image features, where the low-level image features provide the detail information of the image. The low-level image features output by the deep convolutional neural network are then input into the ASPP network and the first convolution layer respectively, yielding the current low-level image features. The high-level image features are input into the second convolution layer, and the image features output by the second convolution layer are input into the first pooling layer for upsampling to obtain the current high-level image features. The current high-level image features and the current low-level image features are input into the splicing layer for splicing to obtain a bone feature map; the bone feature map is input into the third convolution layer, and the image features output by the third convolution layer are input into the second pooling layer for upsampling to obtain a patella feature map consistent with the size of the medical image.
For example, as shown in Figure 6, the low-level image features are input into the ASPP network to extract the semantic information of the image and obtain high-level image features 616. When the ASPP network consists of one 1x1 convolution 608, three 3x3 atrous convolutions (609, 610 and 611) and one global pooling 612, the three 3x3 atrous convolutions have different sampling rates of 6, 12 and 18 respectively; the low-level image features can therefore be sampled in parallel by atrous convolutions with different rates, which better captures the context information of the image.
The low-level image features are input into the first convolution layer 602 to obtain the current low-level image features 613. When the first convolution layer 602 is a 1x1 convolution, the number of channels of the low-level image features can be reduced for subsequent feature concatenation.
The high-level image features 616 are input into the second convolution layer 603; when the second convolution layer 603 is a 1x1 convolution, the number of channels of the high-level image features can be reduced for subsequent feature concatenation. The image features 614 output by the second convolution layer 603, i.e. the channel-reduced high-level image features, are input into the first pooling layer 604 for upsampling to obtain the current high-level image features.
The current high-level image features and the current low-level image features 613 are input into the splicing layer 605 for splicing to obtain the bone feature map 615; concatenating the current high-level and current low-level image features improves the accuracy of the segmentation boundary.
The bone feature map 615 is input into the third convolution layer 606, and the image features output by the third convolution layer 606 are input into the second pooling layer 607 for upsampling, so that the patella features are restored to the size of the medical image, yielding a patella feature map consistent with the size of the medical image.
Identifying and marking the first patella feature point on the patella feature map may include:
inputting the patella feature map into a pre-trained point recognition model to obtain an image with the first patella feature point marked.
After the patella feature map is obtained, in order to identify and mark the first patella feature point on it, the patella feature map can be input into the pre-trained point recognition model, which performs feature point recognition based on the patella feature map and outputs an image with the first patella feature point marked. The point recognition model is a model trained on sample patella feature maps.
The point recognition model may include: a fourth convolution layer, a fifth convolution layer, a replication layer and a pooling layer. As shown in Figure 7, the fourth convolution layer includes four convolutions, namely the first convolution 701, the second convolution 702, the third convolution 703 and the fourth convolution 704. The replication layer includes four replication structures, namely the first replication structure 715, the second replication structure 714, the third replication structure 713 and the fourth replication structure 712. The fifth convolution layer includes three convolutions, namely the fifth convolution 705, the sixth convolution 706 and the seventh convolution 707, and the pooling layer includes four pooling structures, namely the first pooling structure 711, the second pooling structure 710, the third pooling structure 709 and the fourth pooling structure 708.
The first convolution 701, the second convolution 702, the third convolution 703, the fourth convolution 704, the fifth convolution 705, the sixth convolution 706 and the seventh convolution 707 are connected in sequence, and the first pooling structure 711, the second pooling structure 710, the third pooling structure 709 and the fourth pooling structure 708 are connected in sequence; the seventh convolution 707 is connected to the fourth pooling structure 708, the first convolution 701 to the first replication structure 715, the second convolution 702 to the second replication structure 714, the third convolution 703 to the third replication structure 713, and the fourth convolution 704 to the fourth replication structure 712.
There is a corresponding relationship between the first convolution 701, the first replication structure 715 and the first pooling structure 711; between the second convolution 702, the second replication structure 714 and the second pooling structure 710; between the third convolution 703, the third replication structure 713 and the third pooling structure 709; and between the fourth convolution 704, the fourth replication structure 712 and the fourth pooling structure 708.
Inputting the patella feature map into the pre-trained point recognition model to obtain an image with the first patella feature point marked may include:
inputting the patella feature map into the fourth convolution layer for feature extraction to obtain features to be copied; inputting the features to be copied into the corresponding replication layer for feature replication to obtain replicated features; inputting the features to be copied into the fifth convolution layer for feature extraction to obtain features to be pooled; adding the features to be pooled and the replicated features and inputting the sum into the corresponding pooling layer to obtain pooled features; obtaining a heat map based on the pooled features; and selecting the maximum probability value point from the heat map as the first patella feature point and marking it, where the heat map includes pixels whose values represent the probability of the first patella feature point, and the maximum probability value point is the point with the largest pixel value.
For example, as shown in Figure 7, the patella feature map is input into the first convolution 701, the second convolution 702, the third convolution 703 and the fourth convolution 704 of the fourth convolution layer for feature extraction in sequence. After the patella feature map is input into the first convolution 701 for feature extraction, the image features output by the first convolution 701 can be input into the second convolution 702 and the first replication structure 715. The second convolution 702 performs feature extraction on the image features output by the first convolution 701, and its output can be input into the third convolution 703 and the second replication structure 714. The third convolution 703 performs feature extraction on the output of the second convolution 702, and its output can be input into the fourth convolution 704 and the third replication structure 713. The fourth convolution 704 performs feature extraction on the output of the third convolution 703, and its output can be input into the fifth convolution layer and the fourth replication structure 712.
The fifth convolution 705, the sixth convolution 706 and the seventh convolution 707 of the fifth convolution layer sequentially extract the image features output by the fourth convolution 704; the result is added to the image features output by the fourth replication structure 712, and the sum is input into the fourth pooling structure 708 for upsampling. The image features output by the fourth pooling structure 708 are added to those output by the third replication structure 713, and the sum is input into the third pooling structure 709 for upsampling. The image features output by the third pooling structure 709 are added to those output by the second replication structure 714, and the sum is input into the second pooling structure 710 for upsampling. The image features output by the second pooling structure 710 are added to those output by the first replication structure 715, and the sum is input into the first pooling structure 711 for upsampling. In this way, the pooled features output by the first pooling structure 711 superimpose all the image features, retaining the image information at each scale. Through a 1x1 convolution, a heat map whose pixel values represent the probability of the first patella feature point can then be generated from the pooled features; the point with the largest pixel value is selected from the heat map as the first patella feature point and marked.
It can be seen that this application can input the medical image into the pre-trained segmentation model to obtain the patella feature map, and input the patella feature map into the pre-trained point recognition model to obtain an image including the patella feature point. Based on the medical image, the pre-trained segmentation model and the pre-trained point recognition model, an image containing the first patella feature point can thus be obtained more conveniently and quickly for subsequent acquisition of patella information.
As an implementation of the present application, the internal or external rotation of the three-dimensional patella prosthesis can be adjusted, as can its anteversion or posterior inclination. The relative position of the current three-dimensional patella prosthesis with respect to the femur and the tibia can also be adjusted; for example, the prosthesis can be moved up or down, or medially or laterally. This is all reasonable. In this way, the placement of the three-dimensional patella prosthesis can be understood.
As one implementation, the three-dimensional patella prosthesis can be fine-tuned in 0.1 mm steps so that it lies within a preset range; in this way, the position of the patella can be understood.
The patella image processing system provided by the present application is described below; the patella image processing system described below and the patella image processing method described above can be referred to in correspondence with each other.
As shown in Figure 8, the present application discloses a patella image processing system, the system including:
a first acquisition module 810 configured to acquire a medical image of the knee joint and perform image segmentation based on the medical image to obtain a patella feature map;
a marking module 820 configured to identify and mark a first patella feature point on the patella feature map,
where the first patella feature point includes a first upper pole point, a first lower pole point, a first lateral edge point and a first medial edge point;
a projection module 830 configured to perform three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and to project, based on the position information of the first patella feature point, the first patella feature point onto the first surface of the three-dimensional patella model to obtain a second patella feature point,
where the second patella feature point includes a second upper pole point, a second lower pole point, a second lateral edge point and a second medial edge point;
a second acquisition module 840 configured to acquire a patella prosthesis based on the structural parameters of the three-dimensional patella model;
a first determination module 850 configured to determine the target osteotomy surface of the three-dimensional patella model based on the patella prosthesis and the second patella feature point.
As an implementation of the present application, the second patella feature point may further include a plurality of first target points.
The above system may further include:
a dividing module configured to, after the first patella feature point is projected onto the first surface of the three-dimensional patella model to obtain the second patella feature point, divide the first surface into four point candidate regions based on the connecting line between the second upper pole point and the second lower pole point and the connecting line between the second lateral edge point and the second medial edge point;
a second determination module configured to select any three of the four point candidate regions, select one point from each of the three selected point candidate regions as a first target point, and determine a first plane based on the three first target points,
where the first plane is used to determine the target osteotomy surface of the three-dimensional patella model.
As an implementation of the present application, the above system may further include:
an adjustment module configured to, before the first patella feature point is projected, based on its position information, onto the first surface of the three-dimensional patella model to obtain the second patella feature point, adjust the three-dimensional patella model based on a correction line segment so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body,
where the correction line segment is composed of the connecting line between the second upper pole point and the second lower pole point, and the connecting line between the second lateral edge point and the second medial edge point.
As an implementation of the present application, the above first determination module 850 may include:
an acquisition unit configured to acquire parameter information of the patella prosthesis based on the patella prosthesis;
a first determination unit configured to determine the osteotomy thickness value of the three-dimensional patella model based on the parameter information of the patella prosthesis;
a projection unit configured to project the three first target points respectively in a direction away from the first surface of the three-dimensional patella model to obtain three second target points respectively corresponding to the three first target points, where the distance between each first target point and the corresponding second target point is the osteotomy thickness value;
a second determination unit configured to determine the target osteotomy surface of the three-dimensional patella model based on the three second target points,
where the target osteotomy surface is parallel to the first plane.
As an implementation of the present application, the above first acquisition module 810 may include:
a first input unit configured to input the medical image into a pre-trained segmentation model to obtain the patella feature map,
where the segmentation model is a model trained on sample medical images.
The above marking module 820 may include:
a second input unit configured to input the patella feature map into a pre-trained point recognition model to obtain an image with the first patella feature point marked,
where the point recognition model is a model trained on sample patella feature maps.
As an implementation of the present application, the segmentation model may include: a deep convolutional neural network, an atrous spatial pyramid pooling network, a first convolution layer, a second convolution layer, a third convolution layer, a first pooling layer, a second pooling layer and a splicing layer;
the above first input unit may include:
a first input subunit configured to input the medical image into the deep convolutional neural network to extract low-level image features;
a second input subunit configured to input the low-level image features into the atrous spatial pyramid pooling network to extract the semantic information of the image and obtain high-level image features;
a third input subunit configured to input the low-level image features into the first convolution layer to obtain current low-level image features;
a fourth input subunit configured to input the high-level image features into the second convolution layer, and to input the image features output by the second convolution layer into the first pooling layer for upsampling, to obtain current high-level image features;
a fifth input subunit configured to input the current high-level image features and the current low-level image features into the splicing layer for splicing to obtain a bone feature map;
a sixth input subunit configured to input the bone feature map into the third convolution layer, and to input the image features output by the third convolution layer into the second pooling layer for upsampling, to obtain a patella feature map consistent with the size of the medical image.
As an implementation of the present application, the point recognition model may include: a fourth convolution layer, a fifth convolution layer, a replication layer and a pooling layer;
the above second input unit may include:
a seventh input subunit configured to input the patella feature map into the fourth convolution layer for feature extraction to obtain features to be copied;
an eighth input subunit configured to input the features to be copied into the corresponding replication layer for feature replication to obtain replicated features;
a ninth input subunit configured to input the features to be copied into the fifth convolution layer for feature extraction to obtain features to be pooled;
a tenth input subunit configured to add the features to be pooled and the replicated features, input the sum into the corresponding pooling layer to obtain pooled features, and obtain a heat map based on the pooled features, where the heat map includes pixels whose values represent the probability of patella feature points;
a selection subunit configured to select the maximum probability value point from the heat map as the first patella feature point and mark the first patella feature point,
where the maximum probability value point is the point with the largest pixel value.
Figure 9 illustrates a schematic diagram of the physical structure of an electronic device. As shown in Figure 9, the electronic device may include: a processor 910, a communications interface 920, a memory 930 and a communication bus 940, where the processor 910, the communications interface 920 and the memory 930 communicate with each other through the communication bus 940. The processor 910 can call the logic instructions in the memory 930 to execute the patella image processing method provided by each of the above methods.
In addition, the above logic instructions in the memory 930 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, and other media that can store program code.
In another aspect, the present application further provides a computer program product, the computer program product including a computer program that can be stored on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer can execute the deep-learning-based three-dimensional preoperative planning method for knee patella replacement provided by each of the above methods.
In yet another aspect, the present application further provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the deep-learning-based three-dimensional preoperative planning method for knee patella replacement provided by each of the above methods.
The system embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which a person of ordinary skill in the art can understand and implement without creative effort.
Through the description of the above embodiments, a person skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and of course also by hardware. Based on this understanding, the above technical solution, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the various embodiments or certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of the technical features therein can be replaced by equivalents; these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A deep-learning-based three-dimensional preoperative planning method for knee patellar replacement, the method comprising:
    acquiring a medical image of a knee joint and performing image segmentation based on the medical image to obtain a patella feature map;
    identifying and marking first patellar feature points on the patella feature map, wherein the first patellar feature points include a first superior pole point, a first inferior pole point, a first lateral edge point, and a first medial edge point;
    performing three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and projecting, based on position information of the first patellar feature points, the first patellar feature points onto a first surface of the three-dimensional patella model to obtain second patellar feature points, wherein the second patellar feature points include a second superior pole point, a second inferior pole point, a second lateral edge point, and a second medial edge point;
    acquiring a patellar prosthesis based on structural parameters of the three-dimensional patella model; and
    determining a target osteotomy plane of the three-dimensional patella model based on the patellar prosthesis and the second patellar feature points.
  2. The deep-learning-based three-dimensional preoperative planning method for knee patellar replacement according to claim 1, wherein the second patellar feature points further include a plurality of first target points;
    after projecting the first patellar feature points onto the first surface of the three-dimensional patella model to obtain the second patellar feature points, the method further comprises:
    dividing the first surface into four point candidate regions based on a connecting line between the second superior pole point and the second inferior pole point and a connecting line between the second lateral edge point and the second medial edge point; and
    selecting any three of the four point candidate regions, selecting one point from each of the three selected point candidate regions as a first target point, and determining a first plane based on the three first target points, wherein the first plane is used to determine the target osteotomy plane of the three-dimensional patella model.
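The first plane of claim 2, determined by three first target points taken from three of the four candidate regions, is the standard plane through three non-collinear points. A minimal sketch with illustrative coordinates (`plane_from_points` is an assumed helper name; the plane is returned as a unit normal n and offset d with n·x = d):

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane through three non-collinear points, as (unit normal, offset d)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)   # spans the plane's two in-plane directions
    normal = normal / np.linalg.norm(normal)
    return normal, float(normal @ p1)

# three illustrative first target points, one per chosen candidate region
n, d = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(n, d)  # [0. 0. 1.] 0.0
```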
  3. The deep-learning-based three-dimensional preoperative planning method for knee patellar replacement according to claim 2, wherein before projecting, based on the position information of the first patellar feature points, the first patellar feature points onto the first surface of the three-dimensional patella model to obtain the second patellar feature points, the method further comprises:
    adjusting the three-dimensional patella model based on correction line segments so that the first surface of the three-dimensional patella model is parallel to the coronal plane of the human body, wherein the correction line segments consist of the connecting line between the second superior pole point and the second inferior pole point and the connecting line between the second lateral edge point and the second medial edge point.
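One way to realize the adjustment of claim 3 — making the first surface parallel to the human coronal plane — is to rotate the model so that the normal of the plane spanned by the two correction line segments coincides with the coronal normal. The sketch below uses Rodrigues' rotation formula; the segment vectors and the coronal normal (0, 0, 1) are illustrative assumptions, not values from the claim:

```python
import numpy as np

def rotation_aligning(normal, target=(0.0, 0.0, 1.0)):
    """Rotation matrix taking unit(normal) onto unit(target) (Rodrigues' formula)."""
    a = np.array(normal, dtype=float); a /= np.linalg.norm(a)
    b = np.array(target, dtype=float); b /= np.linalg.norm(b)
    v, c = np.cross(a, b), float(a @ b)
    if np.isclose(c, 1.0):                      # already aligned
        return np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)  # valid whenever c != -1

# assumed correction segments: superior-inferior and medial-lateral connecting lines
seg_si = np.array([0.0, 1.0, 0.2])
seg_ml = np.array([1.0, 0.0, 0.1])
n = np.cross(seg_si, seg_ml)                    # normal of the plane they span
R = rotation_aligning(n)                        # apply R to every model vertex
aligned = R @ (n / np.linalg.norm(n))
print(aligned)  # ≈ (0, 0, 1): the spanned surface is now parallel to the coronal plane
```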
  4. The deep-learning-based three-dimensional preoperative planning method for knee patellar replacement according to claim 2, wherein determining the target osteotomy plane of the three-dimensional patella model based on the patellar prosthesis and the second patellar feature points comprises:
    acquiring parameter information of the patellar prosthesis based on the patellar prosthesis;
    determining an osteotomy thickness value of the three-dimensional patella model based on the parameter information of the patellar prosthesis;
    projecting the three first target points respectively in a direction away from the first surface of the three-dimensional patella model to obtain three second target points respectively corresponding to the three first target points, the distance between each first target point and its corresponding second target point being the osteotomy thickness value; and
    determining the target osteotomy plane of the three-dimensional patella model based on the three second target points, wherein the target osteotomy plane is parallel to the first plane.
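The projection step of claim 4 — moving each first target point away from the first surface by the osteotomy thickness to obtain the second target points, which then span a parallel cut plane — reduces to offsetting points along a unit direction. A hedged sketch; the 8 mm thickness and the surface-normal direction are assumed example values, and `offset_points` is a hypothetical helper:

```python
import numpy as np

def offset_points(points, direction, thickness):
    """Shift each point by `thickness` along the unit vector of `direction`."""
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    return [np.asarray(p, dtype=float) + thickness * direction for p in points]

first = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]               # three first target points
second = offset_points(first, direction=(0, 0, -1),       # away from the first surface
                       thickness=8.0)                     # assumed 8 mm osteotomy depth
print(second[0])  # [ 0.  0. -8.]
```

Because every point is moved by the same vector, the plane through the second target points is automatically parallel to the first plane, as the claim requires.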
  5. The method according to claim 1, wherein performing image segmentation based on the medical image to obtain the patella feature map comprises:
    inputting the medical image into a pre-trained segmentation model to obtain the patella feature map, wherein the segmentation model is a model trained on sample medical images;
    and identifying and marking the first patellar feature points on the patella feature map comprises:
    inputting the patella feature map into a pre-trained point recognition model to obtain an image on which the first patellar feature points are marked, wherein the point recognition model is a model trained on sample patella feature maps.
  6. The method according to claim 5, wherein the segmentation model includes: a deep convolutional neural network, an atrous spatial pyramid pooling network, a first convolutional layer, a second convolutional layer, a third convolutional layer, a first pooling layer, a second pooling layer, and a concatenation layer;
    inputting the medical image into the pre-trained segmentation model to obtain the patella feature map comprises:
    inputting the medical image into the deep convolutional neural network to extract low-level image features;
    inputting the low-level image features into the atrous spatial pyramid pooling network to extract semantic information of the image and obtain high-level image features;
    inputting the low-level image features into the first convolutional layer to obtain current low-level image features;
    inputting the high-level image features into the second convolutional layer, and inputting the image features output by the second convolutional layer into the first pooling layer for upsampling to obtain current high-level image features;
    inputting the current high-level image features and the current low-level image features into the concatenation layer for concatenation to obtain a bone feature map; and
    inputting the bone feature map into the third convolutional layer, and inputting the image features output by the third convolutional layer into the second pooling layer for upsampling to obtain a patella feature map of the same size as the medical image.
  7. The method according to claim 5, wherein the point recognition model includes: a fourth convolutional layer, a fifth convolutional layer, a copy layer, and a pooling layer;
    inputting the patella feature map into the pre-trained point recognition model to obtain the image on which the patellar feature points are marked comprises:
    inputting the patella feature map into the fourth convolutional layer for feature extraction to obtain features to be copied;
    inputting the features to be copied into the corresponding copy layer for feature copying to obtain copied features;
    inputting the features to be copied into the fifth convolutional layer for feature extraction to obtain features to be pooled;
    adding the features to be pooled to the copied features, inputting the sum into the corresponding pooling layer to obtain pooled features, and obtaining a heatmap based on the pooled features, wherein the heatmap includes pixels whose pixel values represent the probability of being a patellar feature point; and
    selecting the maximum-probability point from the heatmap as the first patellar feature point and marking the first patellar feature point, wherein the maximum-probability point is the pixel with the largest pixel value.
  8. A deep-learning-based three-dimensional preoperative planning system for knee patellar replacement, the system comprising:
    a first acquisition module, configured to acquire a medical image of a knee joint and perform image segmentation based on the medical image to obtain a patella feature map;
    a marking module, configured to identify and mark first patellar feature points on the patella feature map, wherein the first patellar feature points include a first superior pole point, a first inferior pole point, a first lateral edge point, and a first medial edge point;
    a projection module, configured to perform three-dimensional reconstruction based on the patella feature map to obtain a three-dimensional patella model, and to project, based on position information of the first patellar feature points, the first patellar feature points onto a first surface of the three-dimensional patella model to obtain second patellar feature points, wherein the second patellar feature points include a second superior pole point, a second inferior pole point, a second lateral edge point, and a second medial edge point;
    a second acquisition module, configured to acquire a patellar prosthesis based on structural parameters of the three-dimensional patella model; and
    a first determination module, configured to determine a target osteotomy plane of the three-dimensional patella model based on the patellar prosthesis and the second patellar feature points.
  9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the patella image processing method according to any one of claims 1 to 7.
  10. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the patella image processing method according to any one of claims 1 to 7.
PCT/CN2023/082710 2022-07-15 2023-03-21 Deep-learning-based three-dimensional preoperative planning method and system for knee patellar replacement WO2024011943A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210836442.4A CN115393272B (zh) 2022-07-15 2022-07-15 Deep-learning-based three-dimensional preoperative planning system and method for knee patellar replacement
CN202210836442.4 2022-07-15

Publications (1)

Publication Number Publication Date
WO2024011943A1 true WO2024011943A1 (zh) 2024-01-18

Family

ID=84115994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/082710 WO2024011943A1 (zh) 2022-07-15 2023-03-21 Deep-learning-based three-dimensional preoperative planning method and system for knee patellar replacement

Country Status (2)

Country Link
CN (1) CN115393272B (zh)
WO (1) WO2024011943A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393272B (zh) * 2022-07-15 2023-04-18 北京长木谷医疗科技有限公司 Deep-learning-based three-dimensional preoperative planning system and method for knee patellar replacement
CN116071372B (zh) * 2022-12-30 2024-03-19 北京长木谷医疗科技股份有限公司 Knee joint segmentation method and apparatus, electronic device, and storage medium
CN116071386B (zh) * 2023-01-09 2023-10-03 安徽爱朋科技有限公司 Dynamic segmentation method for medical images of joint diseases
CN116898574B (zh) * 2023-09-06 2024-01-09 北京长木谷医疗科技股份有限公司 Preoperative planning method, system, and device for artificial-intelligence knee ligament reconstruction surgery

Citations (6)

Publication number Priority date Publication date Assignee Title
CN111166474A (zh) * 2019-04-23 2020-05-19 艾瑞迈迪科技石家庄有限公司 Auxiliary preoperative examination method and apparatus for joint replacement surgery
CN112132834A (zh) * 2020-09-18 2020-12-25 中山大学 Ventricle image segmentation method, system, apparatus, and storage medium
CN113017829A (zh) * 2020-08-22 2021-06-25 张逸凌 Deep-learning-based preoperative planning method, system, medium, and device for total knee arthroplasty
CN113919020A (zh) * 2021-09-24 2022-01-11 北京长木谷医疗科技有限公司 Guide plate design method for unicondylar replacement and related devices
WO2022037696A1 (zh) * 2020-08-21 2022-02-24 张逸凌 Deep-learning-based bone segmentation method and system
CN115393272A (zh) * 2022-07-15 2022-11-25 北京长木谷医疗科技有限公司 Deep-learning-based three-dimensional preoperative planning system and method for knee patellar replacement

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US9646229B2 (en) * 2012-09-28 2017-05-09 Siemens Medical Solutions Usa, Inc. Method and system for bone segmentation and landmark detection for joint replacement surgery
CN109833121B (zh) * 2019-02-14 2021-08-10 重庆熙科医疗科技有限公司 Design method for a patellar prosthesis
CN112957126B (zh) * 2021-02-10 2022-02-08 北京长木谷医疗科技有限公司 Deep-learning-based preoperative planning method for unicondylar replacement and related devices
CN112971981B (zh) * 2021-03-02 2022-02-08 北京长木谷医疗科技有限公司 Deep-learning-based total hip joint image processing method and device
CN114041878A (zh) * 2021-10-19 2022-02-15 山东建筑大学 Three-dimensional reconstruction method and system for CT images of a bone-joint replacement surgical robot
CN114419618B (zh) * 2022-01-27 2024-02-02 北京长木谷医疗科技股份有限公司 Deep-learning-based preoperative planning system for total hip arthroplasty


Also Published As

Publication number Publication date
CN115393272A (zh) 2022-11-25
CN115393272B (zh) 2023-04-18

Similar Documents

Publication Publication Date Title
WO2024011943A1 (zh) Deep-learning-based three-dimensional preoperative planning method and system for knee patellar replacement
CN110189352B (zh) Tooth root extraction method based on oral CBCT images
WO2023142956A1 (zh) Deep-learning-based preoperative planning system for total hip arthroplasty
US20210012492A1 (en) Systems and methods for obtaining 3-d images from x-ray information for deformed elongate bones
WO2023078309A1 (zh) Target feature point extraction method and apparatus, computer device, and storage medium
KR101744079B1 (ko) Face model generation method for dental procedure simulation
WO2016116946A2 (en) A system and method for obtaining 3-dimensional images using conventional 2-dimensional x-ray images
CN112509119B (zh) Spatial data processing and positioning method for the temporal bone, apparatus, and electronic device
US11348216B2 (en) Technologies for determining the accuracy of three-dimensional models for use in an orthopaedic surgical procedure
KR102211688B1 (ko) Method and apparatus for meniscus segmentation in knee magnetic resonance images
US11883220B2 (en) Technologies for determining the spatial orientation of input imagery for use in an orthopaedic surgical procedure
WO2023056877A1 (zh) Method and apparatus for determining the femoral force line of the knee joint, electronic device, and storage medium
CN113077498A (zh) Pelvis registration method, pelvis registration apparatus, and pelvis registration system
CN114642444A (zh) Oral implant accuracy evaluation method, system, and terminal device
EP3972513B1 (en) Automated planning of shoulder stability enhancement surgeries
CN112258494B (zh) Lesion position determination method and apparatus, and electronic device
CN113077499A (zh) Pelvis registration method, pelvis registration apparatus, and pelvis registration system
WO2023241032A1 (zh) Deep-learning-based method and system for intelligent recognition of osteoarthritis
CN109741360B (zh) Bone joint segmentation method, apparatus, terminal, and readable medium
CN117422721B (zh) Intelligent annotation method based on lower-limb CT images
CN114469341B (zh) Acetabulum registration method based on hip joint replacement
KR102516945B1 (ko) Artificial-intelligence-based head and neck landmark detection method and apparatus
CN117672472B (zh) Symmetric annotation method based on abdominal CT images
US20240233103A9 (en) Technologies for determining the accuracy of three-dimensional models for use in an orthopaedic surgical procedure
US20240000512A1 (en) Calculating range of motion

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23838445

Country of ref document: EP

Kind code of ref document: A1