WO2022170768A1 - Unicondylar joint image processing method, apparatus, device and storage medium - Google Patents

Unicondylar joint image processing method, apparatus, device and storage medium

Info

Publication number
WO2022170768A1
WO2022170768A1 PCT/CN2021/120586 CN2021120586W
Authority
WO
WIPO (PCT)
Prior art keywords
image
segmentation
unicondylar
dimensional
image data
Prior art date
Application number
PCT/CN2021/120586
Other languages
English (en)
French (fr)
Inventor
张逸凌
刘星宇
Original Assignee
北京长木谷医疗科技有限公司
张逸凌
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京长木谷医疗科技有限公司, 张逸凌 filed Critical 北京长木谷医疗科技有限公司
Publication of WO2022170768A1 publication Critical patent/WO2022170768A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 Computer-aided simulation of surgical operations
    • A61B2034/102 Modelling of surgical devices, implants or prosthesis
    • A61B2034/104 Modelling the effect of the tool, e.g. the effect of an implanted prosthesis or for predicting the effect of ablation or burring
    • A61B2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A61B2034/107 Visualisation of planned trajectories or target regions
    • A61B2034/108 Computer aided selection or customisation of medical implants or cutting guides

Definitions

  • the invention relates to the technical field of artificial intelligence, and in particular, to a method, device, equipment and storage medium for processing images of a unicondylar joint.
  • Unicompartmental knee arthroplasty (UKA) is an essential step in the stepwise treatment of knee osteoarthritis.
  • Current knee UKA surgery still relies on the surgeon's experience for lower-limb alignment and soft-tissue balance.
  • The osteotomy parameters in unicondylar knee replacement (such as the osteotomy angle and osteotomy volume), and even the prosthesis size, are chosen by the surgeon's "visual" estimation; individual differences between patients and the surgeon's familiarity with the instruments may affect the surgical outcome.
  • The present invention provides a method, device, electronic equipment and storage medium for processing unicondylar joint images, which are used to overcome the defects brought to unicondylar replacement surgery by individual patient differences and the subjective experience of surgeons, and to realize artificial-intelligence-based matching of unicondylar replacement prostheses.
  • The present invention provides a method for processing a unicondylar joint image,
  • in which the unicondylar prosthesis is matched on the basis of deep learning on image data.
  • The method includes the following steps: acquiring knee joint image data, and obtaining a three-dimensional bone image based on the knee joint image data,
  • wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image; identifying and displaying the key points and key axes of the three-dimensional bone image; calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes; and, based on the key points, the key axes, the size parameters and the angle parameters, matching the unicondylar prosthesis in a database of pre-stored prosthesis models and visually displaying the matching result of the unicondylar prosthesis.
  • Obtaining a three-dimensional bone image based on the knee joint image data includes the following steps: acquiring image data of the knee joint, and performing image segmentation on the image data based on a deep learning algorithm; and performing three-dimensional reconstruction on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image and display them visually.
  • After the three-dimensional reconstruction is performed on the segmented image data to obtain and visually display the three-dimensional femur image and the three-dimensional tibia image, the method further includes: determining whether the segmentation of the knee joint image data needs to be optimized and, if it does, receiving an input segmentation adjustment instruction and adjusting the segmentation of the knee joint image data.
  • Performing image segmentation on the image data based on a deep learning algorithm means performing image segmentation on the image data based on a segmentation neural network model, whose associated parameters are determined by training and testing on an image dataset in a lower-limb medical image database; the image dataset in the lower-limb medical image database is a set of lower-limb medical images annotated with the femur, tibia, fibula and patella regions, and the dataset is divided into a training set and a test set.
  • The segmentation neural network is at least one of 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, atrous (dilated) convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
  • The key points are key anatomical sites;
  • the key anatomical sites are identified by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
  • The key points of the three-dimensional bone image include one or more of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau.
  • The key axes of the three-dimensional bone image include one or more of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial border of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion.
  • The size parameters include one or more of the following: the anteroposterior diameter of the femur, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior inclination angle of the tibial plateau. The angle parameters include one or more of the following: the posterior inclination angle of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
  • Determining the associated parameters of the segmentation neural network model by training and testing on the image dataset in the lower-limb medical image database includes:
  • training the segmentation neural network model with the raw data to be segmented and the corresponding annotated pixel-level labels as input; stopping training and obtaining the associated parameters once the evaluation metric of the model on the validation set reaches the preset model evaluation target; and otherwise continuing to train the model until the metric on the validation set reaches the preset target.
  • the present invention also provides a device for processing images of a unicondylar joint, the device comprising: an acquisition module, a recognition and calculation module, and a prosthesis matching module.
  • the acquisition module is configured to acquire knee joint image data, and obtain a three-dimensional bone image based on the knee joint image data; wherein, the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image;
  • The identification and calculation module is configured to identify and display the key points and key axes of the three-dimensional bone image, and to calculate the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
  • The prosthesis matching module is configured to perform unicondylar prosthesis matching in the database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and to visually display the matching result of the unicondylar prosthesis.
  • the acquisition module includes an image segmentation unit and a three-dimensional reconstruction unit; the image segmentation unit is configured to acquire image data of the knee joint, and analyze the image data based on a deep learning algorithm. Perform image segmentation; the three-dimensional reconstruction unit is configured to perform three-dimensional reconstruction based on the segmented image data, obtain a three-dimensional femur image and a three-dimensional tibia image, and visualize them.
  • the acquisition module further includes a segmentation adjustment unit; the segmentation adjustment unit is configured to determine whether the segmentation of the image data for the knee joint needs to be optimized, if the segmentation adjustment unit is for the knee joint If the segmentation of the image data needs to be optimized, the input segmentation adjustment instruction is received, and the segmentation of the image data of the knee joint is adjusted.
  • When the image segmentation unit performs image segmentation on the image data based on a deep learning algorithm, it may perform the segmentation based on a segmentation neural network model whose associated parameters are determined by training and testing on an image dataset in a lower-limb medical image database; the image dataset in the lower-limb medical image database is a set of lower-limb medical images annotated with the femur, tibia, fibula and patella regions, and the dataset is divided into a training set and a test set.
  • The segmentation neural network is at least one of 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, atrous (dilated) convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
  • The key points are key anatomical sites, and the key anatomical sites are identified by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
  • The key points of the three-dimensional bone image include one or more of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau.
  • The key axes of the three-dimensional bone image include one or more of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial border of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion.
  • The size parameters include one or more of the following: the anteroposterior diameter of the femur, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior inclination angle of the tibial plateau.
  • The angle parameters include one or more of the following: the posterior inclination angle of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
  • Determining the associated parameters of the segmentation neural network model by training and testing on the image dataset in the lower-limb medical image database includes:
  • training the segmentation neural network model with the raw data to be segmented and the corresponding annotated pixel-level labels as input;
  • stopping training and obtaining the associated parameters of the segmentation neural network model once its evaluation metric on the validation set reaches the preset model evaluation target, and otherwise continuing to train the model until the metric on the validation set reaches the preset target.
  • the present invention also provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor implements any one of the above-mentioned unicondylar joints when executing the program The steps of the image processing method.
  • the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the steps of any one of the above-mentioned methods for processing images of a unicondylar joint.
  • The present invention provides a method, device, electronic device and storage medium for processing unicondylar joint images, which identify the key points and key axes of the femur and the tibia from the three-dimensional femur image and three-dimensional tibia image generated from the knee joint image data,
  • calculate the size parameters and angle parameters of the femur and the tibia according to the key points and the key axes, match the unicondylar prosthesis according to the key points, the key axes and the respective size and angle parameters of the femur and the tibia, and
  • visually display the matching result of the unicondylar prosthesis.
  • The invention overcomes the defects brought to unicondylar replacement surgery by individual patient differences and the subjective experience of surgeons, realizes artificial-intelligence-based matching of unicondylar replacement prostheses, provides surgeons with accurate and effective technical support and assurance, makes unicondylar replacement surgery more accurate and safer, and promotes the development of surgery towards intelligence and precision.
  • Fig. 1 is the first schematic flowchart of the unicondylar joint image processing method provided by the present invention
  • FIG. 2 is a schematic flowchart of obtaining a three-dimensional skeleton image based on knee joint image data in a method for processing a unicondylar joint image provided by the present invention
  • Fig. 3 is the working principle diagram of converting knee joint image data into three-dimensional skeleton image based on segmentation neural network and three-dimensional reconstruction in the processing method of unicondylar joint image of the present invention
  • FIG. 4 is a schematic diagram of a three-dimensional skeleton image generated based on three-dimensional reconstruction in the method for processing a unicondylar joint image of the present invention
  • FIG. 5 is a schematic diagram of key point recognition in the method for processing a unicondylar joint image of the present invention
  • FIG. 6 is an effect diagram of placing a prosthesis in the method for processing a unicondylar joint image of the present invention
  • Fig. 7 is the effect diagram of simulating postoperative preview in the processing method of the unicondylar joint image of the present invention.
  • FIG. 8 is the second schematic flow chart of the method for processing a unicondylar joint image provided by the present invention.
  • FIG. 9 is a schematic structural diagram of a device for processing a unicondylar joint image provided by the present invention.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by the present invention.
  • Fig. 1 is the first schematic flowchart of the unicondylar joint image processing method provided by the present invention, and the method comprises the following steps:
  • Step 110 Obtain knee joint image data, and obtain a three-dimensional skeleton image based on the knee joint image data.
  • the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image.
  • Step 120 Identify and display key points and key axes of the three-dimensional bone image; and calculate size parameters and angle parameters of the femur and tibia respectively according to the key points and the key axes.
  • Step 130 based on the key point, the key axis, the size parameter and the angle parameter, perform unicondylar prosthesis matching in a database of pre-stored prosthesis models, and visualize the matching effect of the unicondylar prosthesis.
  • This embodiment identifies the key points and key axes of the femur and the tibia from the three-dimensional femur image and three-dimensional tibia image generated from the knee joint image data, and calculates the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
  • The unicondylar prosthesis is then matched using the key points, key axes and respective size and angle parameters of the femur and the tibia, and the matching result of the unicondylar prosthesis is displayed visually.
  • the invention overcomes the defects caused by the individual differences of patients and the subjective experience of doctors to the artificial unicondylar replacement operation, realizes the matching of unicondylar replacement prostheses based on artificial intelligence, provides accurate and powerful technical support and guarantee for doctors, and makes unicondylar replacement surgery possible. Replacement surgery is more accurate and safer, and promotes the development of surgery in the direction of intelligence and precision.
  • Step 110 Obtain knee joint image data, and obtain a three-dimensional skeleton image based on the knee joint image data.
  • the knee joint image data in this step may be CT (Computed Tomography, computer tomography) image data, or may be Magnetic Resonance Imaging (MRI) image data.
  • CT Computer Tomography
  • MRI Magnetic Resonance Imaging
  • the data format can be an existing format, such as dicom format.
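For illustration, a CT series stored in DICOM format as described above could be read into a single volume roughly as follows. This is a minimal sketch assuming pydicom and numpy are available; the folder layout and the use of the rescale tags are assumptions, not part of the disclosed method.

```python
# Minimal sketch: load a knee-joint CT series stored as per-slice DICOM files
# into a single volume in Hounsfield units. Paths are illustrative.
from pathlib import Path

import numpy as np
import pydicom


def load_ct_volume(series_dir: str) -> np.ndarray:
    """Read every .dcm slice in a directory and stack it into a 3D array."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    # Sort by the z position so the stacking order follows the anatomy.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Convert raw values to Hounsfield units using the stored rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept


ct = load_ct_volume("knee_case_001/")  # hypothetical study folder
print(ct.shape, ct.min(), ct.max())
```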
  • The knee joint image data can be converted into a three-dimensional femur image and a three-dimensional tibia image with the aid of a deep learning algorithm of artificial intelligence.
  • For example: 1) acquire image data of the knee joint and perform image segmentation on the image data based on a deep learning algorithm; 2) perform three-dimensional reconstruction on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image and display them visually.
  • Fig. 2 is a schematic flowchart of obtaining a three-dimensional skeleton image based on knee joint image data in the unicondylar joint image processing method provided by the present invention, including the following steps:
  • Step 1101 acquiring image data of the knee joint.
  • Step 1102 Perform image segmentation on the image data based on a deep learning algorithm.
  • Artificial intelligence (AI) is a new technical science that studies and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. It is a branch of computer science that attempts to understand the essence of intelligence and to produce intelligent machines that respond in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking.
  • Deep learning (DL) is a new research direction in the field of machine learning (ML); it was introduced into machine learning to bring it closer to its original goal, artificial intelligence. Deep learning learns the intrinsic regularities and representation levels of sample data, and the information obtained in this learning process greatly helps the interpretation of data such as text, images and sound. Its ultimate goal is to give machines the same ability to analyze and learn as humans, enabling them to recognize data such as text, images and sound.
  • ML Machine Learning
  • the deep learning algorithm is a segmentation neural network model, that is, image segmentation is performed on image data based on the segmentation neural network model.
  • the associated parameters of the segmentation neural network model are determined by training and testing based on image datasets in the lower extremity medical image database.
  • the image dataset in the lower extremity medical image database is a lower extremity medical image dataset marked with femur, tibia, fibula and patella regions, and the image dataset is divided into a training set and a test set;
  • The unannotated medical image data are converted into pictures in a first format and saved, and the annotated data are converted into pictures in a second format and saved.
  • FIG. 3 there is shown a working principle diagram of converting knee joint image data into three-dimensional bone images based on segmentation neural network and three-dimensional reconstruction in the unicondylar joint image processing method of the present invention.
  • the input information of the segmentation neural network model is knee joint image data, such as knee joint image data A1 shown in FIG. 3, knee joint image data A2, knee joint image data A3, ..., knee joint image data An-1, and, Knee joint image data An.
  • the output end of the segmentation neural network is connected to the input end of the 3D reconstruction module 3, and through 3D reconstruction, 3D bone image data is generated, as described above, including 3D femur image data and 3D tibia image data.
  • the segmentation neural network may include: 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, Atrous Convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, At least one of DeepMask, DeepLabV1, DeepLabV2, and DeepLabV3.
  • the associated parameters of the segmentation neural network are determined by training and testing based on image data in a pre-stored lower extremity medical image database.
  • A dataset of CT medical images of patients with knee disease is obtained, the femur, tibia, fibula and patella regions are annotated manually, and the result is used as the database. It is divided into a training set and a test set in a 7:3 ratio; the two-dimensional axial DICOM data are converted into JPG images and the annotation files into PNG images, which are saved as the input of the neural network.
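The dataset preparation described above (manual annotation, a 7:3 train/test split, DICOM slices exported as JPG and annotation files as PNG) could be sketched as follows. Directory names, the windowing values and the intermediate .npy annotation format are illustrative assumptions.

```python
# Minimal sketch: split annotated cases 7:3 and export axial DICOM slices as
# JPG images with PNG label masks. Layout and windowing are assumptions.
import random
from pathlib import Path

import numpy as np
import pydicom
from PIL import Image


def export_case(dicom_dir: Path, mask_dir: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for dcm_path in sorted(dicom_dir.glob("*.dcm")):
        ds = pydicom.dcmread(dcm_path)
        img = ds.pixel_array.astype(np.float32)
        # Simple bone window so the JPG keeps useful contrast (assumed values).
        img = np.clip((img - 300) / 1500, 0, 1) * 255
        Image.fromarray(img.astype(np.uint8)).save(out_dir / f"{dcm_path.stem}.jpg")
        # Labels (femur/tibia/fibula/patella indices) are saved losslessly as PNG.
        mask = np.load(mask_dir / f"{dcm_path.stem}.npy")  # hypothetical annotation format
        Image.fromarray(mask.astype(np.uint8)).save(out_dir / f"{dcm_path.stem}_label.png")


cases = sorted(Path("annotated_cases").iterdir())
random.seed(0)
random.shuffle(cases)
split = int(len(cases) * 0.7)  # 7:3 train/test split
for case in cases[:split]:
    export_case(case / "dicom", case / "labels", Path("train") / case.name)
for case in cases[split:]:
    export_case(case / "dicom", case / "labels", Path("test") / case.name)
```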
  • 2D Dense-Unet introduces the denseblock structure on the basis of the Unet model, which makes the segmentation result more accurate, and the segmentation accuracy is greatly improved compared to the traditional segmentation method.
  • The Unet structure contains two highlights: the U-shaped structure and the skip connection.
  • The downsampling (encoder) and upsampling (decoder) operations in Unet restore the high-level semantic feature maps obtained by downsampling to the resolution of the original image.
  • Unet performs multiple upsampling steps and uses skip connections within the same stage, instead of supervising and back-propagating the loss directly on the high-level semantic features. This ensures that the finally recovered feature maps fuse more low-level image features, and that features of different scales are fused, so that multi-scale prediction and super-resolution prediction can be performed.
  • The multiple upsampling steps also allow the segmentation map to recover edges and similar details more finely.
  • DenseNet has very good resistance to overfitting and is especially suitable for applications where training data are relatively scarce. An intuitive explanation is that the features extracted by each layer of a neural network are a nonlinear transformation of the input data, and the complexity of this transformation increases gradually with depth (the composition of more nonlinear functions). Whereas the classifier of an ordinary neural network depends directly on the features of the last layer of the network (the highest complexity), DenseNet can make comprehensive use of shallow, low-complexity features, so it is easier to obtain a smooth decision function with better generalization.
  • Each sub-module of UNet is therefore replaced with a densely connected form, that is, dense blocks are introduced into Unet; because the advantages of the two are combined, the segmentation result is better and the accuracy is higher.
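A minimal PyTorch sketch of the idea of replacing a U-Net sub-module with a dense block is given below; the channel counts, growth rate and depth are illustrative assumptions rather than the configuration used in the patent.

```python
# Minimal PyTorch sketch: a dense block used as a U-Net sub-module, so each
# layer sees the concatenation of all earlier feature maps in the block.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(
                nn.Sequential(
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                )
            )
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)


# A U-Net stage would then be: dense block -> 2x2 max-pool on the way down,
# and upsample -> skip concatenation -> dense block on the way up.
block = DenseBlock(in_channels=64)
print(block(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 128, 128, 128])
```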
  • The inputs of the hip-joint bone segmentation/femur segmentation network are the raw data to be segmented and the corresponding surgeon-annotated pixel-level bone/femur labels, i.e. the labels corresponding to the images.
  • During training, the raw data of the training set and the corresponding labels are fed into the network in turn to train the network.
  • Custom model evaluation metrics such as IOU (the intersection-over-union between the model output and the ground-truth labels), precision, recall and F-measure are used to monitor the training; when the evaluation metrics of the model on the validation set reach the expected values, training is stopped and the weight file of the current model is saved, and when they do not, the model is tuned further until the metrics on the validation set are optimal.
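The evaluation metrics named above can be computed directly from a predicted mask and its ground-truth label; the following is a minimal sketch for the binary (single-bone) case.

```python
# Minimal sketch of the metrics named above (IoU, precision, recall, F-measure)
# for one predicted binary mask versus its ground truth.
import numpy as np


def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> dict:
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f_measure = 2 * precision * recall / (precision + recall + eps)
    return {"iou": iou, "precision": precision, "recall": recall, "f_measure": f_measure}


pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
print(segmentation_metrics(pred, gt))
```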
  • Step 1103 Perform three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image.
  • Three-dimensional reconstruction refers to establishing, for a three-dimensional object, a mathematical model suitable for computer representation and processing. It is the basis for processing, operating on and analyzing the object's properties in a computer environment, and is also the technology for building, in the computer, a virtual reality that expresses the objective world.
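As an illustration of this reconstruction step, a stack of segmented slices can be turned into a surface mesh with the marching-cubes routine of scikit-image; the voxel spacing and label values below are assumptions for the sketch, not the patent's reconstruction pipeline.

```python
# Minimal sketch: extract the surface of one bone label from a (z, y, x) label
# volume with marching cubes. Spacing and label indices are illustrative.
import numpy as np
from skimage import measure


def reconstruct_surface(label_volume: np.ndarray, label: int, spacing=(0.625, 0.5, 0.5)):
    binary = (label_volume == label).astype(np.uint8)
    verts, faces, normals, _ = measure.marching_cubes(binary, level=0.5, spacing=spacing)
    return verts, faces, normals


# Hypothetical labels: 1 = femur, 2 = tibia.
labels = np.zeros((64, 128, 128), dtype=np.uint8)
labels[10:30, 40:90, 40:90] = 1
verts, faces, _ = reconstruct_surface(labels, label=1)
print(verts.shape, faces.shape)
```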
  • Step 1104 visually displaying the three-dimensional reconstructed three-dimensional femur image and three-dimensional tibia image.
  • FIG. 4 shows a three-dimensional bone image generated based on three-dimensional reconstruction in the method for processing a unicondylar joint image of the present invention.
  • the left area 4a shows two-dimensional views of the bone in transverse, sagittal and coronal planes
  • the right area 4b shows the three-dimensional reconstructed image of the bone.
  • Step 1105: determine whether the image segmentation on which the 3D bone image is based needs to be optimized; if it does, go to step 1106; if it does not, execute step 1107.
  • Whether the segmentation of the whole-knee image data in step 1102 is reasonable can be judged from the visualized result in FIG. 4; the judgment can be made by manual inspection.
  • The axial, sagittal and coronal CT images and the three-dimensional bone image can be linked along three axes and observed in two and three dimensions at the same time.
  • Step 1106 Receive the input division adjustment instruction, and return to step 1102.
  • Step 1107 End the 3D skeleton image generation operation.
  • Step 120 will be described below.
  • Step 120 identifying key points and key axes of the three-dimensional bone image, and displaying them; and calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
  • the identification of key points and key axes from a three-dimensional skeleton image such as FIG. 4 can be implemented using an artificial neural network model.
  • Key point recognition can be implemented by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
  • Taking the HRNet neural network as an example for identifying key points: most methods obtain low-resolution feature maps from a high-resolution input through a cascaded high-to-low network and then recover high-resolution feature maps from the low-resolution ones.
  • the network used maintains high-resolution feature maps throughout.
  • the backbone part mainly adopts high-to-low and low-to-high frameworks and uses multi-scale fusion and intermediate supervision to enhance the information as much as possible.
  • the high-to-low process aims to generate low-resolution but higher-level features
  • the low-to-high process aims to produce high-resolution features, both of which may be repeated multiple times to improve performance. Therefore, the heatmap predicted by HRNet is more accurate.
  • the high-resolution feature pyramid in HRNet starts from 1/4 resolution and obtains higher-resolution features through transposed convolution.
  • Multi-resolution supervision is used to allow features from different layers to learn information at different scales.
  • Multi-resolution fusion is also used to uniformly put heat maps of different resolutions into the original image size and fuse them together to obtain a scale-sensitive feature.
  • If the coordinates of the target points were learned directly, the neural network would have to convert spatial positions into coordinates by itself, which is difficult to learn; the points are therefore rendered as Gaussian maps and supervised with heatmaps, i.e. the output of the network is a feature map of the same size as the input that is 1 at the position of the detected point and 0 elsewhere.
  • Detecting multiple points outputs feature maps with multiple channels.
  • The network uses Adam optimization with a learning rate of 1e-5 and a batch_size of 4, and the loss function uses L2 regularization. The size of the training batch is adjusted according to the change of the loss function during training, and the coordinates of the key points are finally obtained.
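A minimal sketch of the heatmap supervision and training configuration described above is given below; the keypoint network itself (for example an HRNet variant) is assumed and not constructed here, and the coordinates are illustrative.

```python
# Minimal sketch: each key point is rendered as a 2D Gaussian (one channel per
# point) and regressed with an L2 loss using Adam (lr=1e-5, batch size 4).
import numpy as np
import torch


def make_heatmaps(points, height, width, sigma=3.0) -> np.ndarray:
    """points: list of (x, y) pixel coordinates -> (num_points, H, W) targets."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for x, y in points:
        maps.append(np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2)))
    return np.stack(maps).astype(np.float32)


target = torch.from_numpy(make_heatmaps([(50, 80), (120, 40)], 256, 256))
print(target.shape)  # torch.Size([2, 256, 256])

# Training configuration named in the text (model and data loader assumed):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
# criterion = torch.nn.MSELoss()          # L2 loss on predicted vs. target heatmaps
# for images, heatmaps in loader:         # batch_size=4 in the DataLoader
#     loss = criterion(model(images), heatmaps)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```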
  • the key points in this embodiment may be key anatomical sites.
  • critical anatomical sites may include critical points and critical axes.
  • FIG. 5 is a schematic diagram of key point recognition in the method for processing an image of a unicondylar joint according to the present invention.
  • the black dots marked in the 3D skeleton image in the middle of Figure 5 are the key points.
  • the left area 5a shows two-dimensional views of the bone in transverse, sagittal and coronal planes, and the right area 5b shows the key points included in the three-dimensional reconstructed image of the bone.
  • the key points of the three-dimensional bone image may include one or more of the following combinations: the nadir of the distal femur, the nadir of the tibial plateau, and the medial and lateral borders of the tibial plateau.
  • The key axes of the three-dimensional bone image may include one or more of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial border of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion. The identification of the key points is checked manually, and key points whose identified positions are inaccurate are adjusted.
  • the size parameters and angle parameters of the femur and the tibia are respectively calculated according to the key points and the key axis.
  • The size parameters include one or more of the following: the anteroposterior diameter of the femur, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior inclination angle of the tibial plateau.
  • the angle parameters include one or more combinations of the following: the posterior inclination angle of the tibial plateau, the included angle between the femoral mechanical axis and the tibial mechanical axis, and the included angle between the femoral anatomical axis and the tibial anatomical axis.
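Once the key points and key axes are available, the size and angle parameters reduce to distances between points and angles between axis vectors; the following sketch uses illustrative coordinates, not patient data.

```python
# Minimal sketch: axes as vectors between 3D key points, angle parameters as
# angles between those vectors, size parameters as point-to-point distances.
import numpy as np


def angle_between(a: np.ndarray, b: np.ndarray) -> float:
    """Angle between two axis vectors in degrees."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))


femoral_head = np.array([60.0, 40.0, 420.0])   # hypothetical landmarks (mm)
knee_center = np.array([58.0, 42.0, 30.0])
ankle_center = np.array([55.0, 44.0, -350.0])
femoral_mechanical = knee_center - femoral_head
tibial_mechanical = ankle_center - knee_center
print("femoral-tibial mechanical angle:",
      angle_between(femoral_mechanical, tibial_mechanical))

# A size parameter such as the anteroposterior diameter of the tibial plateau is
# the distance between its anterior-most and posterior-most key points.
anterior, posterior = np.array([60.0, 10.0, 30.0]), np.array([60.0, 62.0, 28.0])
print("tibial plateau AP diameter:", float(np.linalg.norm(posterior - anterior)))
```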
  • Step 130 will be described below.
  • Step 130 perform unicondylar prosthesis matching in a database of pre-stored prosthesis models based on key points, key axes, the size parameters and the angle parameters, and visualize the matching effect of the unicondylar prosthesis.
  • This step is explained from three aspects.
  • The database storing the prosthesis models holds data pre-stored in the system. It mainly stores unicondylar prosthesis models for unicondylar replacement surgery; the unicondylar prosthesis models differ in model and size.
  • CT scans of normal human joints can be performed, the joint morphology and the post-osteotomy morphology can be measured with digital technology, and a digital joint model database can then be established to provide morphological data for the design of the unicondylar prosthesis models.
  • In step 120 above, the key points, key axes, size parameters and angle parameters based on the patient's three-dimensional bone image are determined.
  • The system searches for matching candidates in the database of pre-stored prosthesis models based on this information and makes an intelligent recommendation.
  • In the intelligent recommendation, the model, placement position and placement angle of the unicondylar prosthesis model are given, so as to restore the physiological posterior inclination of the patient's tibial plateau and correct the patient's joint deformity.
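The patent does not spell out the matching rule, so the following is only an illustrative sketch of one possible lookup: choose the prosthesis model in the pre-stored database whose nominal dimensions are closest to the measured bone dimensions. The database entries and the distance weighting are assumptions.

```python
# Illustrative sketch of matching against a pre-stored prosthesis database by
# nearest nominal dimensions. Entries and weighting are assumptions.
from dataclasses import dataclass


@dataclass
class UnicondylarProsthesis:
    name: str
    femoral_ap: float         # anteroposterior coverage of the femoral component (mm)
    tibial_plateau_ap: float  # anteroposterior coverage of the tibial component (mm)


DATABASE = [
    UnicondylarProsthesis("size-1", 52.0, 42.0),
    UnicondylarProsthesis("size-2", 56.0, 45.0),
    UnicondylarProsthesis("size-3", 60.0, 48.0),
]


def recommend(femoral_ap: float, tibial_ap: float) -> UnicondylarProsthesis:
    return min(
        DATABASE,
        key=lambda p: abs(p.femoral_ap - femoral_ap) + abs(p.tibial_plateau_ap - tibial_ap),
    )


print(recommend(femoral_ap=57.5, tibial_ap=46.0).name)  # -> size-2
```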
  • FIG. 6 is an effect diagram of the matching prosthesis in the image processing method of the unicondylar joint of the present invention.
  • the figure shows a unicondylar prosthesis 6a, a three-dimensional reconstruction of the femur 6b, a three-dimensional reconstruction of the tibia 6c, and a three-dimensional reconstruction of the fibula 6d.
  • In the visualization scenario, the prosthesis model and its placement position may also be inspected manually, and fine adjustments may be made when the placement position or angle deviates.
  • After the prosthesis matching is completed, the method may further include the steps of simulating the osteotomy according to the osteotomy parameters and then matching the unicondylar prosthesis model through the visualization platform.
  • FIG. 7 is an effect diagram of a simulated postoperative preview in the unicondylar joint image processing method of the present invention.
  • Fig. 8 is a second schematic flow chart of a method for processing images of a unicondylar joint provided by the present invention, comprising the following steps:
  • Step 801 Select CT image data of the knee joint.
  • Step 802 Perform image segmentation on the image data based on a deep learning algorithm.
  • Step 803 Perform three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image.
  • step 804 the 3D reconstructed 3D femur image and the 3D tibia image are visually displayed.
  • Step 805: according to the visualization result, determine whether the image segmentation on which the 3D bone image is based needs to be optimized; if it does, execute step 806; if it does not, execute step 807.
  • Step 806 Receive the input division adjustment instruction, and return to step 802.
  • Step 807 Identify key points and key axes of the three-dimensional bone image, and calculate size parameters and angle parameters of the femur and tibia according to the key points and the key axes, respectively.
  • Step 808: based on the key points and key axes and the size parameters and angle parameters of the femur and tibia, the system recommends a matched unicondylar prosthesis model.
  • Step 809: adjust the placement position and angle of the unicondylar prosthesis model.
  • Step 810: simulate the osteotomy and preview the simulated postoperative result.
  • On the basis of the CT image data and the artificial-intelligence segmentation, three-dimensional reconstruction is performed, and the femoral alignment, tibial alignment, AKAGI line, lowest point of the distal femur, lowest point of the tibial plateau, anteroposterior diameter of the distal femur, anteroposterior diameter of the tibial plateau, and medial and lateral edges of the tibial plateau are identified intelligently; the posterior inclination angle of the tibial plateau is calculated intelligently, the placement position and angle of the unicondylar prosthesis are recommended intelligently, and the osteotomy volume is planned and a simulated osteotomy is performed to correct the patient's intra-articular deformity.
  • This embodiment overcomes the defects caused by the individual differences of patients and the subjective experience of doctors to artificial unicondylar replacement surgery, realizes the matching of unicondylar replacement prostheses based on artificial intelligence, provides accurate and powerful technical support and guarantee for doctors, and makes single condyle replacement surgery possible. Condylar replacement surgery is more accurate and safer, and promotes the development of surgery in the direction of intelligence and precision.
  • FIG. 9 shows a unicondylar joint image processing device according to the present invention.
  • the device includes an acquisition module 90 , a recognition and calculation module 92 , and a prosthesis matching module 94 .
  • the acquisition module 90 is configured to acquire knee joint image data, and obtain a three-dimensional bone image based on the knee joint image data; wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image.
  • the identification and calculation module 92 is used to identify the key points and key axes of the three-dimensional bone image, and display them; and, according to the key points and key axes, respectively calculate the size parameters and angle parameters of the femur and the tibia;
  • the prosthesis matching module 94 is used to perform unicondylar prosthesis matching in the database of pre-stored prosthesis models based on key points, key axes, dimension parameters and angle parameters, and visualize the matching effect of the unicondylar prosthesis.
  • This embodiment identifies the key points and key axes of the femur and the tibia from the three-dimensional femur image and three-dimensional tibia image generated from the knee joint image data, and calculates the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
  • The unicondylar prosthesis is then matched using the key points, key axes and respective size and angle parameters of the femur and the tibia, and the matching result of the unicondylar prosthesis is displayed visually.
  • the invention overcomes the defects caused by the individual differences of patients and the subjective experience of doctors to the artificial unicondylar replacement operation, realizes the matching of unicondylar replacement prostheses based on artificial intelligence, provides accurate and powerful technical support and guarantee for doctors, and makes unicondylar replacement surgery possible. Replacement surgery is more accurate and safer, and promotes the development of surgery in the direction of intelligence and precision.
  • the acquisition module 90 includes: an image segmentation unit 901 , a three-dimensional reconstruction unit 902 and a segmentation adjustment unit 903 .
  • the image segmentation unit 901 is configured to obtain image data of the knee joint, and perform image segmentation on the image data based on a deep learning algorithm;
  • the three-dimensional reconstruction unit 902 is configured to perform three-dimensional reconstruction based on the segmented image data, obtain a three-dimensional femur image and a three-dimensional tibia image, and display them visually.
  • In addition to the image segmentation unit and the three-dimensional reconstruction unit, the acquisition module can also include:
  • the segmentation adjustment unit 903, which is configured to determine whether the segmentation of the knee joint image data needs to be optimized and, if so, to receive an input segmentation adjustment instruction and adjust the segmentation of the knee joint image data.
  • the image segmentation unit 901 performs image segmentation on the image data based on the segmentation neural network model; and the associated parameters of the segmentation neural network model are determined by training and testing based on the image dataset in the lower limb medical image database.
  • The image dataset in the lower-limb medical image database is a set of lower-limb medical images annotated with the femur, tibia, fibula and patella regions, and the dataset is divided into a training set and a test set; the unannotated medical image data are converted into pictures in a first format and saved, and the annotated data are converted into pictures in a second format and saved.
  • the segmentation neural network is 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, Atrous Convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1 , at least one of DeepLabV2 and DeepLabV3.
  • The key points are key anatomical sites, and the key anatomical sites are identified by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
  • The key points of the 3D bone image include one or more of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau.
  • The key axes of the 3D bone image include one or more of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial border of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion.
  • The size parameters include one or more of the following: the anteroposterior diameter of the femur, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior inclination angle of the tibial plateau.
  • the angle parameters include one or more combinations of the following: the posterior inclination angle of the tibial plateau, the included angle between the femoral mechanical axis and the tibial mechanical axis, and the included angle between the femoral anatomical axis and the tibial anatomical axis.
  • FIG. 10 illustrates a schematic diagram of the physical structure of an electronic device.
  • the electronic device may include: a processor (processor) 1010, a communication interface (Communications Interface) 1020, a memory (memory) 1030 and a communication bus 1040,
  • the processor 1010 , the communication interface 1020 , and the memory 1030 communicate with each other through the communication bus 1040 .
  • The processor 1010 can invoke the logic instructions in the memory 1030 to execute a unicondylar joint image processing method, the method comprising: acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data,
  • wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image; identifying and displaying the key points and key axes of the three-dimensional bone image, and calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes; and, based on the key points, the key axes, the size parameters and the angle parameters,
  • matching the unicondylar prosthesis in the database of pre-stored prosthesis models and visually displaying the matching result of the unicondylar prosthesis.
  • the above-mentioned logic instructions in the memory 1030 can be implemented in the form of software functional units and can be stored in a computer-readable storage medium when sold or used as an independent product.
  • In essence, the part of the technical solution of the present invention that contributes to the prior art, or the technical solution itself, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage medium includes: U disk, mobile hard disk, Read-Only Memory (ROM, Read-Only Memory), Random Access Memory (RAM, Random Access Memory), magnetic disk or optical disk and other media that can store program codes .
  • The present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to execute the unicondylar joint image processing method provided above, the method comprising: acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data,
  • wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image; identifying and displaying the key points and key axes of the three-dimensional bone image, and calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes; and, based on the key points, the key axes, the size parameters and the angle parameters,
  • matching the unicondylar prosthesis in the database of pre-stored prosthesis models and visually displaying the matching result of the unicondylar prosthesis.
  • The present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the unicondylar joint image processing method provided above is implemented, the method comprising: acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data,
  • wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image; identifying and displaying the key points and key axes of the three-dimensional bone image, and calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes; and, based on the key points, the key axes, the size parameters and the angle parameters,
  • matching the unicondylar prosthesis in the database of pre-stored prosthesis models and visually displaying the matching result of the unicondylar prosthesis.
  • the device embodiments described above are only illustrative, wherein the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in One place, or it can be distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
  • each embodiment can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by hardware.
  • The above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform the methods described in the various embodiments or in some parts of the embodiments.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Robotics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)
  • Prostheses (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a unicondylar joint image processing method, apparatus, device and storage medium. The method includes: acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data, the three-dimensional bone image including a three-dimensional femur image and a three-dimensional tibia image; identifying and displaying key points and key axes of the three-dimensional bone image, and calculating size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes; and performing unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, key axes, size parameters and angle parameters, and visually displaying the unicondylar prosthesis matching result. The invention overcomes the defects brought to unicondylar replacement surgery by individual patient differences and the subjective experience of surgeons, realizes artificial-intelligence-based matching of unicondylar replacement prostheses, provides surgeons with accurate and effective technical support and assurance, and makes unicondylar replacement surgery more accurate and safer.

Description

Unicondylar joint image processing method, apparatus, device, and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on February 10, 2021, with application number CN202110185454.0 and the title "Deep-learning-based preoperative planning method for unicondylar replacement and related devices", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of artificial intelligence, and in particular to a method, apparatus, device and storage medium for processing unicondylar joint images.
Background
Unicompartmental knee arthroplasty (UKA) is an essential step in the stepwise treatment of knee osteoarthritis.
Clinical studies in recent years have found that, in the treatment of medial compartmental knee osteoarthritis (MOA), UKA preserves almost all of the biological function of the knee compared with total knee arthroplasty (TKA), offers a shorter operation time, less surgical trauma and a lower incidence of postoperative complications, and yields better postoperative functional recovery than TKA. However, UKA places high demands on the postoperative lower-limb alignment: even a slight deviation of the alignment shortens the service life of the prosthesis and increases the revision rate.
Current knee UKA surgery still relies on the surgeon's experience for lower-limb alignment and soft-tissue balance, and the osteotomy parameters in unicondylar knee replacement (such as the osteotomy angle and osteotomy volume) and even the prosthesis size are chosen by the surgeon's "visual" estimation; individual differences between patients and the surgeon's familiarity with the instruments may affect the surgical outcome.
Summary of the Invention
The present invention provides a method, apparatus, electronic device and storage medium for processing unicondylar joint images, which are used to overcome the defects brought to unicondylar replacement surgery by individual patient differences and the subjective experience of surgeons, and to realize artificial-intelligence-based matching of unicondylar replacement prostheses.
The present invention provides a unicondylar joint image processing method in which the unicondylar prosthesis is matched on the basis of deep learning on image data. The method includes the following steps: acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data, wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image; identifying and displaying key points and key axes of the three-dimensional bone image, and calculating size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes; and performing unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and visually displaying the unicondylar prosthesis matching result.
According to the unicondylar joint image processing method of the present invention, obtaining the three-dimensional bone image based on the knee joint image data includes the following steps: acquiring image data of the knee joint and performing image segmentation on the image data based on a deep learning algorithm; and performing three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image and display them visually.
According to the unicondylar joint image processing method of the present invention, after the three-dimensional reconstruction is performed based on the segmented image data to obtain and visually display the three-dimensional femur image and the three-dimensional tibia image, the method further includes: determining whether the segmentation of the knee joint image data needs to be optimized and, if it does, receiving an input segmentation adjustment instruction and adjusting the segmentation of the knee joint image data.
According to the unicondylar joint image processing method of the present invention, performing image segmentation on the image data based on a deep learning algorithm means performing image segmentation on the image data based on a segmentation neural network model; the associated parameters of the segmentation neural network model are determined by training and testing on an image dataset in a lower-limb medical image database, the image dataset being a set of lower-limb medical images annotated with the femur, tibia, fibula and patella regions and divided into a training set and a test set.
According to the unicondylar joint image processing method of the present invention, the segmentation neural network is at least one of 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, atrous (dilated) convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
According to the unicondylar joint image processing method of the present invention, in identifying the key points of the three-dimensional bone image, the key points are key anatomical sites; and
the key anatomical sites are identified by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
According to the unicondylar joint image processing method of the present invention, the key points of the three-dimensional bone image include one or more of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau; the key axes of the three-dimensional bone image include one or more of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial border of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion; the size parameters include one or more of the following: the anteroposterior diameter of the femur, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior inclination angle of the tibial plateau; the angle parameters include one or more of the following: the posterior inclination angle of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
According to the unicondylar joint image processing method of the present invention, determining the associated parameters of the segmentation neural network model by training and testing on the image dataset in the lower-limb medical image database includes:
training the segmentation neural network model with the raw data to be segmented and the corresponding annotated pixel-level labels as its input data;
stopping training and obtaining the associated parameters of the segmentation neural network model once its evaluation metric on the validation set reaches the preset model evaluation target; and
continuing to train the segmentation neural network model if its evaluation metric on the validation set has not reached the preset target, until the metric on the validation set reaches the preset model evaluation target.
In a second aspect, the present invention further provides a unicondylar joint image processing apparatus, which includes an acquisition module, an identification and calculation module, and a prosthesis matching module. The acquisition module is configured to acquire knee joint image data and obtain a three-dimensional bone image based on the knee joint image data, the three-dimensional bone image including a three-dimensional femur image and a three-dimensional tibia image. The identification and calculation module is configured to identify and display the key points and key axes of the three-dimensional bone image, and to calculate size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes. The prosthesis matching module is configured to perform unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, key axes, size parameters and angle parameters, and to visually display the unicondylar prosthesis matching result.
According to the unicondylar joint image processing apparatus of the present invention, the acquisition module includes an image segmentation unit and a three-dimensional reconstruction unit; the image segmentation unit is configured to acquire image data of the knee joint and perform image segmentation on the image data based on a deep learning algorithm; the three-dimensional reconstruction unit is configured to perform three-dimensional reconstruction based on the segmented image data, obtain the three-dimensional femur image and the three-dimensional tibia image, and display them visually.
According to the unicondylar joint image processing apparatus of the present invention, the acquisition module further includes a segmentation adjustment unit configured to determine whether the segmentation of the knee joint image data needs to be optimized and, if so, to receive an input segmentation adjustment instruction and adjust the segmentation of the knee joint image data.
According to the unicondylar joint image processing apparatus of the present invention, when performing image segmentation on the image data based on a deep learning algorithm, the image segmentation unit may perform the segmentation based on a segmentation neural network model whose associated parameters are determined by training and testing on an image dataset in a lower-limb medical image database; the image dataset in the lower-limb medical image database is a set of lower-limb medical images annotated with the femur, tibia, fibula and patella regions, and the dataset is divided into a training set and a test set.
According to the unicondylar joint image processing apparatus of the present invention, the segmentation neural network is at least one of 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, atrous (dilated) convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
According to the unicondylar joint image processing apparatus of the present invention, the key points are key anatomical sites, and the key anatomical sites are identified by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
According to the unicondylar joint image processing apparatus of the present invention, the key points of the three-dimensional bone image include one or more of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau; the key axes of the three-dimensional bone image include one or more of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial border of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion; the size parameters include one or more of the following: the anteroposterior diameter of the femur, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior inclination angle of the tibial plateau; the angle parameters include one or more of the following: the posterior inclination angle of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
According to the unicondylar joint image processing apparatus of the present invention, determining the associated parameters of the segmentation neural network model by training and testing on the image dataset in the lower-limb medical image database includes:
training the segmentation neural network model with the raw data to be segmented and the corresponding annotated pixel-level labels as its input data;
stopping training and obtaining the associated parameters of the segmentation neural network model once its evaluation metric on the validation set reaches the preset model evaluation target; and
continuing to train the segmentation neural network model if its evaluation metric on the validation set has not reached the preset target, until the metric on the validation set reaches the preset model evaluation target.
The present invention further provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any of the above unicondylar joint image processing methods.
The present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any of the above unicondylar joint image processing methods.
The unicondylar joint image processing method, apparatus, electronic device and storage medium provided by the present invention identify the key points and key axes of the femur and the tibia from the three-dimensional femur image and three-dimensional tibia image generated from the knee joint image data, calculate the size parameters and angle parameters of the femur and the tibia according to the key points and key axes, perform unicondylar prosthesis matching based on the key points, key axes and the respective size and angle parameters of the femur and tibia, and visually display the matching result.
The present invention overcomes the defects brought to unicondylar replacement surgery by individual patient differences and the subjective experience of surgeons, realizes artificial-intelligence-based matching of unicondylar replacement prostheses, provides surgeons with accurate and effective technical support and assurance, makes unicondylar replacement surgery more accurate and safer, and promotes the development of surgery towards intelligence and precision.
Brief Description of the Drawings
In order to describe the technical solutions of the present invention or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by a person of ordinary skill in the art without creative effort.
Fig. 1 is the first schematic flowchart of the unicondylar joint image processing method provided by the present invention;
Fig. 2 is a schematic flowchart of obtaining a three-dimensional bone image from knee joint image data in the unicondylar joint image processing method provided by the present invention;
Fig. 3 is a working-principle diagram of converting knee joint image data into a three-dimensional bone image based on the segmentation neural network and three-dimensional reconstruction in the unicondylar joint image processing method of the present invention;
Fig. 4 is a schematic diagram of a three-dimensional bone image generated by three-dimensional reconstruction in the unicondylar joint image processing method of the present invention;
Fig. 5 is a schematic diagram of key point identification in the unicondylar joint image processing method of the present invention;
Fig. 6 is an effect diagram of prosthesis placement in the unicondylar joint image processing method of the present invention;
Fig. 7 is an effect diagram of the simulated postoperative preview in the unicondylar joint image processing method of the present invention;
Fig. 8 is the second schematic flowchart of the unicondylar joint image processing method provided by the present invention;
Fig. 9 is a schematic structural diagram of the unicondylar joint image processing apparatus provided by the present invention;
Fig. 10 is a schematic structural diagram of the electronic device provided by the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 1, Fig. 1 is the first schematic flowchart of the unicondylar joint image processing method provided by the present invention, which includes the following steps:
Step 110: acquire knee joint image data and obtain a three-dimensional bone image based on the knee joint image data, the three-dimensional bone image including a three-dimensional femur image and a three-dimensional tibia image.
Step 120: identify and display the key points and key axes of the three-dimensional bone image, and calculate the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
Step 130: perform unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and visually display the unicondylar prosthesis matching result.
In this embodiment, the key points and key axes of the femur and the tibia are identified from the three-dimensional femur image and three-dimensional tibia image generated from the knee joint image data; the size parameters and angle parameters of the femur and the tibia are calculated according to the key points and key axes; the unicondylar prosthesis is matched using the key points, key axes and respective size and angle parameters of the femur and the tibia; and the matching result of the unicondylar prosthesis is displayed visually.
The present invention overcomes the defects brought to unicondylar replacement surgery by individual patient differences and the subjective experience of surgeons, realizes artificial-intelligence-based matching of unicondylar replacement prostheses, provides surgeons with accurate and effective technical support and assurance, makes unicondylar replacement surgery more accurate and safer, and promotes the development of surgery towards intelligence and precision.
The unicondylar joint image processing method of the present invention is further described below.
Step 110: acquire knee joint image data and obtain a three-dimensional bone image based on the knee joint image data.
In one embodiment, the knee joint image data in this step may be CT (computed tomography) image data or magnetic resonance imaging (MRI) image data. The present invention is not limited thereto, and other medical image data of the knee joint may also be used. The data format may be an existing format, such as the DICOM format.
In a specific implementation, the knee joint image data may be converted into a three-dimensional femur image and a three-dimensional tibia image with the aid of a deep learning algorithm of artificial intelligence, for example:
1) acquiring image data of the knee joint and performing image segmentation on the image data based on a deep learning algorithm;
2) performing three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image and display them visually.
An embodiment of how the three-dimensional femur image and the three-dimensional tibia image are obtained with a deep learning algorithm is described below with reference to Fig. 2.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of obtaining a three-dimensional bone image from knee joint image data in the unicondylar joint image processing method provided by the present invention, which includes the following steps:
Step 1101: acquire image data of the knee joint.
Step 1102: perform image segmentation on the image data based on a deep learning algorithm.
Artificial intelligence (AI) is a new technical science that studies and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. As a branch of computer science, it attempts to understand the essence of intelligence and to produce intelligent machines that respond in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking.
Deep learning (DL) is a new research direction in the field of machine learning (ML); it was introduced into machine learning to bring it closer to its original goal, artificial intelligence. Deep learning learns the intrinsic regularities and representation levels of sample data, and the information obtained in this learning process greatly helps the interpretation of data such as text, images and sound. Its ultimate goal is to give machines the same ability to analyze and learn as humans, enabling them to recognize data such as text, images and sound.
In one embodiment, the deep learning algorithm is a segmentation neural network model; that is, image segmentation is performed on the image data based on the segmentation neural network model.
The associated parameters of the segmentation neural network model are determined by training and testing on an image dataset in a lower-limb medical image database. The image dataset in the lower-limb medical image database is a set of lower-limb medical images annotated with the femur, tibia, fibula and patella regions, and the dataset is divided into a training set and a test set; the unannotated medical image data are converted into pictures in a first format and saved, and the annotated data are converted into pictures in a second format and saved.
Referring to Fig. 3, Fig. 3 shows the working principle of converting knee joint image data into a three-dimensional bone image based on the segmentation neural network and three-dimensional reconstruction in the unicondylar joint image processing method of the present invention.
The input of the segmentation neural network model is knee joint image data, for example the knee joint image data A1, A2, A3, ..., An-1 and An shown in Fig. 3.
The output of the segmentation neural network is connected to the input of the three-dimensional reconstruction module 3, and three-dimensional bone image data, including three-dimensional femur image data and three-dimensional tibia image data as described above, are generated by three-dimensional reconstruction.
In a specific implementation, the segmentation neural network may include at least one of: 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, atrous (dilated) convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
The associated parameters of the segmentation neural network are determined by training and testing on image data in a pre-stored lower-limb medical image database.
Taking segmentation with 2D Dense-Unet as an example, the process includes:
Data preprocessing:
A CT medical image dataset of patients with knee disease is acquired, the femur, tibia, fibula and patella regions are annotated manually, and the result is used as the database. It is divided into a training set and a test set in a 7:3 ratio; the two-dimensional axial DICOM data are converted into JPG images and the annotation files into PNG images, which are saved as the input of the neural network.
Building the segmentation neural network model DenseUnet:
2D Dense-Unet introduces the dense-block structure on the basis of the Unet model, which makes the segmentation result more accurate; compared with traditional segmentation methods, the segmentation accuracy is greatly improved.
Network architecture:
The Unet structure contains two highlights: the U-shaped structure and the skip connection. The downsampling (encoder) and upsampling (decoder) operations in Unet restore the high-level semantic feature maps obtained by downsampling to the resolution of the original image. Compared with FCN, DeepLab and similar networks, Unet performs multiple upsampling steps and uses skip connections within the same stage instead of supervising and back-propagating the loss directly on the high-level semantic features. This ensures that the finally recovered feature maps fuse more low-level image features and that features of different scales are fused, enabling multi-scale prediction and super-resolution prediction. The multiple upsampling steps also allow the segmentation map to recover edges and similar details more finely.
DenseNet has very good resistance to overfitting and is particularly suitable for applications where training data are relatively scarce. An intuitive explanation is that the features extracted by each layer of a neural network are a nonlinear transformation of the input data, and the complexity of this transformation increases gradually with depth (the composition of more nonlinear functions). Whereas the classifier of an ordinary neural network depends directly on the features of the last layer of the network (the highest complexity), DenseNet can make comprehensive use of shallow, low-complexity features, so it is easier to obtain a smooth decision function with better generalization.
Inspired by the dense connections of DenseNet, each sub-module of UNet is therefore replaced with a densely connected form, that is, dense blocks are introduced into Unet; because the advantages of the two are combined, the segmentation result is better and the accuracy is higher.
Training process:
The inputs of the hip-joint bone segmentation/femur segmentation network are the raw data to be segmented and the corresponding surgeon-annotated pixel-level bone/femur labels, i.e. the labels corresponding to the images. During training, the raw data of the training set and the corresponding labels are fed into the network in turn to train the network. During training, custom model evaluation metrics such as IOU (the intersection-over-union between the model output and the ground-truth labels), precision, recall and F-measure are used to monitor the training; when the evaluation metrics of the model on the validation set reach the expected values, training is stopped and the weight file corresponding to the current model is saved, and when they do not, the model is tuned further until the evaluation metrics on the validation set are optimal.
Testing process:
During network prediction, the pre-saved optimal model weight file is first loaded, the data to be segmented are then fed into the model, and the output of the model is the segmentation result.
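A minimal sketch of the stopping rule described in the training process above (train until the validation metric reaches the preset target and keep the best weights) is given below; train_one_epoch, evaluate and the target value are hypothetical placeholders, not the patent's implementation.

```python
# Minimal sketch: early stopping on a preset validation metric, saving the best
# weights seen so far. The helpers and the target IoU are illustrative.
import torch


def fit(model, optimizer, train_loader, val_loader, target_iou=0.92, max_epochs=200):
    best_iou = 0.0
    for epoch in range(max_epochs):
        train_one_epoch(model, optimizer, train_loader)  # hypothetical helper
        val_iou = evaluate(model, val_loader)            # hypothetical helper
        if val_iou > best_iou:
            best_iou = val_iou
            torch.save(model.state_dict(), "best_segmentation_weights.pt")
        if val_iou >= target_iou:                        # preset evaluation target reached
            break
    return best_iou
```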
Step 1103: perform three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image.
Three-dimensional reconstruction (3D reconstruction) refers to establishing, for a three-dimensional object, a mathematical model suitable for computer representation and processing. It is the basis for processing, operating on and analyzing the object's properties in a computer environment, and is also the technology for building, in the computer, a virtual reality that expresses the objective world.
Step 1104: visually display the reconstructed three-dimensional femur image and three-dimensional tibia image.
Referring to Fig. 4, Fig. 4 shows a three-dimensional bone image generated by three-dimensional reconstruction in the unicondylar joint image processing method of the present invention. The left area 4a shows two-dimensional views of the bone in the axial, sagittal and coronal planes, and the right area 4b shows the three-dimensionally reconstructed bone image.
The three-dimensional femur image in the upper middle of Fig. 4 and the three-dimensional tibia image below it can be seen clearly. In addition to the three-dimensional femur and tibia images, three-dimensional structures such as the fibula, the patella and sesamoid bones can be seen in Fig. 4.
Step 1105: according to the visualization result, determine whether the image segmentation on which the three-dimensional bone image is based needs to be optimized; if it does, execute step 1106; if it does not, execute step 1107.
Optionally, whether the segmentation of the whole-knee image data in step 1102 is reasonable can be judged from the visualization result in Fig. 4; the judgment can be made by manual inspection.
On the visualized interface shown in Fig. 4, in addition to the reconstructed three-dimensional bone image, the axial, sagittal and coronal CT images are shown, from top to bottom, on the left side. The axial, sagittal and coronal CT images and the three-dimensional bone image can be linked along three axes and observed in two and three dimensions at the same time. The transparency of the reconstructed bone can also be adjusted, and the segmented femur, tibia, fibula and patella can be shown or hidden to observe the articular surfaces.
Step 1106: receive an input segmentation adjustment instruction and return to step 1102.
Step 1107: end the three-dimensional bone image generation operation.
Step 120 is described below.
Step 120: identify and display the key points and key axes of the three-dimensional bone image, and calculate the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
In one embodiment, identifying the key points and key axes from a three-dimensional bone image such as that of Fig. 4 may be implemented with an artificial neural network model.
For example, key point identification may be implemented with at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
Taking identification with HRNet as an example, the process includes:
Data preprocessing:
A CT medical image data set of patients with knee joint disease is acquired, the orthographic projection slices are extracted, and a manual annotation plugin is used to mark the key anatomical landmarks by hand to form the database. The data set is divided into a training set and a test set at a ratio of 7:3.
Building the network model:
The HRNet neural network is used to identify the key points. Most methods obtain low-resolution feature maps from a high-resolution input through a serial high-to-low process and then recover resolution from the low-resolution feature maps. The network used here maintains high-resolution feature maps throughout the entire process.
Starting from one high-resolution subnetwork, high-to-low resolution subnetworks are gradually added one by one to form more stages, and the multi-resolution subnetworks are connected in parallel. The present application performs repeated multi-scale feature fusion so that each high-to-low resolution feature map can continuously receive information from the other parallel feature maps, finally obtaining rich high-resolution feature maps. The backbone mainly adopts high-to-low and low-to-high frameworks and uses multi-scale fusion and intermediate supervision to enhance the information as much as possible; the high-to-low process aims to generate low-resolution but higher-level features, while the low-to-high process aims to produce high-resolution features, and both processes may be repeated several times to improve performance. As a result, the heatmaps predicted by HRNet are more accurate. The high-resolution feature pyramid in HRNet starts from 1/4 resolution and obtains higher-resolution features through transposed convolution.
Training process:
During training, multi-resolution supervision is used so that features at different layers can learn information at different scales. Multi-resolution fusion is also used: heatmaps of different resolutions are all resized to the original image size and fused together, yielding a scale-sensitive feature.
During model training, the inputs are the orthographic projection images with pixel values 0-255 and label.txt, from which the coordinates of the corresponding points can be found via the name of each image. If the coordinates of the target points were learned directly, the neural network would have to convert spatial positions into coordinates by itself, which is a relatively difficult training scheme; therefore these points are converted into Gaussian maps and supervised with heatmaps, i.e. the output of the network is a feature map of the same size as the input, with the value 1 at the detected point and 0 elsewhere; for the detection of multiple points, feature maps with multiple channels are output. The network is optimized with Adam at a learning rate of 1e-5 and a batch_size of 4, the loss function uses L2 regularization, and the batch size is adjusted according to the change of the loss function during training, finally yielding the coordinate values of the key landmarks.
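A small sketch of the heatmap supervision described above is given below; the image size, landmark coordinates and the Gaussian sigma are illustrative assumptions:

```python
import numpy as np

def gaussian_heatmap(height, width, center, sigma=3.0):
    """One supervision channel: a 2D Gaussian peaking at the landmark coordinate."""
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def build_target(height, width, keypoints, sigma=3.0):
    """Stack one heatmap channel per landmark, matching the network's output layout."""
    return np.stack([gaussian_heatmap(height, width, kp, sigma) for kp in keypoints])

# Example: two landmarks on a 256x256 projection image -> target of shape (2, 256, 256).
target = build_target(256, 256, keypoints=[(120, 80), (130, 200)])
```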
Testing process:
At prediction time, the previously saved optimal model weight file is first loaded, the data to be identified is then input into the model, and the output of the model is the identification result.
The key points in this embodiment may be key anatomical sites. In a specific implementation, the key anatomical sites may include key points and key axes.
Referring to Fig. 5, Fig. 5 is a schematic diagram of key point identification in the unicondylar joint image processing method of the present invention. The black dots marked on the three-dimensional bone image in the middle of Fig. 5 are the key points. The left region 5a shows two-dimensional views of the bones in the transverse, sagittal and coronal planes, and the right region 5b shows the key points included in the three-dimensionally reconstructed image of the bones.
In one embodiment, the key points of the three-dimensional bone image may include one or a combination of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau.
In one embodiment, the key axes of the three-dimensional bone image may include one or a combination of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial edge of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion. The identified key points are checked manually, and key points whose identified positions are inaccurate are adjusted.
After the key points are identified and determined, the size parameters and angle parameters of the femur and the tibia are calculated respectively according to the key points and the key axes.
The size parameters include one or a combination of the following: the femoral anteroposterior diameter, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior slope of the tibial plateau.
The angle parameters include one or a combination of the following: the posterior slope of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
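By way of illustration, once two axes are each defined by a pair of identified key points, the angle between them may be computed as follows (the coordinates shown are hypothetical):

```python
import numpy as np

def axis_angle_deg(axis_a, axis_b):
    """Angle in degrees between two axes, each given as a pair of 3D key points."""
    va = np.asarray(axis_a[1], dtype=float) - np.asarray(axis_a[0], dtype=float)
    vb = np.asarray(axis_b[1], dtype=float) - np.asarray(axis_b[0], dtype=float)
    cos_angle = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# e.g. femoral mechanical axis vs. tibial mechanical axis from identified key points
femoral_mech_axis = ((0.0, 0.0, 0.0), (5.0, 2.0, 400.0))   # hypothetical coordinates
tibial_mech_axis = ((0.0, 0.0, 0.0), (1.0, 0.0, -380.0))
print(axis_angle_deg(femoral_mech_axis, tibial_mech_axis))
```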
Step 130 is described below.
Step 130: perform unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and visually display the unicondylar prosthesis matching result.
This step is described from three aspects.
(1) The database storing the prosthesis models, and the unicondylar prosthesis models in the database
The database storing prosthesis models contains data pre-stored in the system. It mainly stores unicondylar prosthesis models used for unicompartmental replacement surgery. The unicondylar prosthesis models differ in type and size.
Regarding the design of the unicondylar prosthesis models: in one embodiment, CT scans of normal human joints may be performed, digital technology may be used to measure the joint morphology and the morphology after osteotomy, and a digital joint model database may then be established to provide morphological data for the design of the unicondylar prosthesis models.
(2) Matching
In step 120 above, the key points, key axes, size parameters and angle parameters based on the patient's three-dimensional bone image were determined.
Based on information such as the key points and the size and angle parameters, the system searches the database of pre-stored prosthesis models for matching objects and makes an intelligent recommendation.
In the intelligent recommendation, the type, placement position and placement angle of the unicondylar prosthesis model are given, so as to restore the physiological posterior slope of the patient's tibial plateau and correct the patient's joint deformity.
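Very roughly, the matching may be pictured as a nearest-size lookup over the stored prosthesis models; the sketch below only illustrates that idea with invented model names and dimensions and is not the recommendation logic itself:

```python
# Hypothetical catalogue: model name -> (AP diameter, ML diameter) in millimetres.
prosthesis_db = {
    "UKA-1": (38.0, 20.0),
    "UKA-2": (41.0, 22.0),
    "UKA-3": (44.0, 24.0),
}

def recommend_model(measured_ap, measured_ml, db):
    """Return the model whose dimensions are closest to the measured bone sizes."""
    def distance(dims):
        ap, ml = dims
        return (ap - measured_ap) ** 2 + (ml - measured_ml) ** 2
    return min(db, key=lambda name: distance(db[name]))

print(recommend_model(40.2, 21.5, prosthesis_db))   # -> "UKA-2"
```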
(3) Visualization
The intelligently recommended unicondylar prosthesis model is selected, and the prosthesis model is displayed on the distal femur and the tibial plateau.
Referring to Fig. 6, Fig. 6 is an effect diagram of prosthesis matching in the unicondylar joint image processing method of the present invention. In the figure, the unicondylar prosthesis 6a, the three-dimensionally reconstructed femur 6b, the three-dimensionally reconstructed tibia 6c and the three-dimensionally reconstructed fibula 6d can be seen.
In the visualization scenario, manual checking of the prosthesis type and placement position may also be included; when there is a deviation in the placement position or angle, fine adjustment can be carried out.
In the unicondylar joint image processing method of the present invention, after the prosthesis matching is completed, a step of simulating the osteotomy according to the osteotomy parameters and then matching the unicondylar prosthesis model through the visualization platform may also be included.
Referring to Fig. 7, Fig. 7 is an effect diagram of the simulated postoperative preview in the unicondylar joint image processing method of the present invention.
From Fig. 7, the three-dimensionally reconstructed femur 7a, the femoral prosthesis 7b, the insert 7c, the tibial prosthesis 7d and the three-dimensionally reconstructed tibia 7e can be seen. By simulating the recovery of the patient's joint surfaces after surgery, the surgical result is presented.
Referring to Fig. 8, Fig. 8 is a second schematic flowchart of the unicondylar joint image processing method provided by the present invention, comprising the following steps:
Step 801: select CT image data of the knee joint.
Step 802: perform image segmentation on the image data based on a deep learning algorithm.
Step 803: perform three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image.
Step 804: visually display the three-dimensionally reconstructed three-dimensional femur image and three-dimensional tibia image.
Step 805: judge, according to the visualization result, whether the image segmentation on which the generation of the three-dimensional bone image is based needs to be optimized; if so, execute step 806; if not, execute step 807.
Step 806: receive an input segmentation adjustment instruction, and return to execute step 802.
Step 807: identify the key points and key axes of the three-dimensional bone image, and calculate the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
Step 808: the system recommends a matching unicondylar prosthesis model according to the key points and key axes and the size and angle parameters of the femur and the tibia;
Step 809: adjust the placement position and angle of the unicondylar prosthesis model;
Step 810: simulate the osteotomy and preview the simulated postoperative result.
Based on the CT image data and on top of the artificial-intelligence segmentation, three-dimensional reconstruction is performed; the femoral alignment axis, the tibial alignment axis, the Akagi line, the lowest point of the distal femur, the lowest point of the tibial plateau, the anteroposterior diameter of the distal femur, the anteroposterior diameter of the tibial plateau, and the medial and lateral edges of the tibial plateau are identified intelligently; the posterior slope of the tibial plateau is calculated intelligently; the placement position and angle of the unicondylar prosthesis are recommended intelligently; and the osteotomy volume is planned and the osteotomy is simulated, correcting the patient's intra-articular deformity.
This embodiment overcomes the drawbacks that individual patient differences and the surgeon's subjective experience bring to manual unicompartmental replacement surgery, achieves artificial-intelligence-based matching of the unicondylar prosthesis, provides surgeons with accurate and reliable technical support and assurance, makes unicompartmental replacement surgery more accurate and safer, and promotes the development of surgery toward intelligence and precision.
Referring to Fig. 9, Fig. 9 shows the unicondylar joint image processing apparatus of the present invention. The apparatus includes: an acquisition module 90, an identification and calculation module 92, and a prosthesis matching module 94.
The acquisition module 90 is configured to acquire knee joint image data and obtain a three-dimensional bone image based on the knee joint image data, the three-dimensional bone image including a three-dimensional femur image and a three-dimensional tibia image.
The identification and calculation module 92 is configured to identify and display the key points and key axes of the three-dimensional bone image, and to calculate the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes.
The prosthesis matching module 94 is configured to perform unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and to visually display the unicondylar prosthesis matching result.
In this embodiment, based on the three-dimensional femur image and the three-dimensional tibia image generated from the knee joint image data, the key points and key axes of the femur and the tibia in the images are identified, the size parameters and angle parameters of the femur and the tibia are calculated respectively according to the key points and the key axes, unicondylar prosthesis matching is performed using the key points, key axes and respective size and angle parameters of the femur and the tibia, and the unicondylar prosthesis matching result is visually displayed.
The present invention overcomes the drawbacks that individual patient differences and the surgeon's subjective experience bring to manual unicompartmental replacement surgery, achieves artificial-intelligence-based matching of the unicondylar prosthesis, provides surgeons with accurate and reliable technical support and assurance, makes unicompartmental replacement surgery more accurate and safer, and promotes the development of surgery toward intelligence and precision.
In a preferred embodiment, the acquisition module 90 includes an image segmentation unit 901, a three-dimensional reconstruction unit 902 and a segmentation adjustment unit 903.
The image segmentation unit 901 is configured to acquire image data of the knee joint and perform image segmentation on the image data based on a deep learning algorithm.
The three-dimensional reconstruction unit 902 is configured to perform three-dimensional reconstruction based on the segmented image data, obtain the three-dimensional femur image and the three-dimensional tibia image, and display them visually.
After the three-dimensional reconstruction unit, the apparatus may further include:
a segmentation adjustment unit 903, configured to judge whether the segmentation of the knee joint image data needs to be optimized and, if so, to receive an input segmentation adjustment instruction and adjust the segmentation of the knee joint image data.
In a specific implementation, in the image segmentation unit 901, image segmentation is performed on the image data based on a segmentation neural network model, and the associated parameters of the segmentation neural network model are determined through training and testing based on an image data set in a lower-limb medical image database.
The image data set in the lower-limb medical image database is a lower-limb medical image data set in which the femur, tibia, fibula and patella regions are annotated, and the data set is divided into a training set and a test set; the medical image data before annotation is converted into pictures in a first format and saved, and the annotated data is converted into pictures in a second format and saved.
Optionally, the segmentation neural network is at least one of 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, dilated convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
In a specific implementation, in the identification and calculation module 92, the key points are key anatomical sites, and the identification of the key anatomical sites is implemented by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
The key points of the three-dimensional bone image include one or a combination of the following:
a) the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau;
b) the key axes of the three-dimensional bone image include one or a combination of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial edge of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion;
c) the size parameters include one or a combination of the following: the femoral anteroposterior diameter, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior slope of the tibial plateau;
d) the angle parameters include one or a combination of the following: the posterior slope of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
Fig. 10 illustrates a schematic diagram of the physical structure of an electronic device. As shown in Fig. 10, the electronic device may include: a processor 1010, a communications interface 1020, a memory 1030 and a communication bus 1040, wherein the processor 1010, the communications interface 1020 and the memory 1030 communicate with one another through the communication bus 1040. The processor 1010 can invoke the logic instructions in the memory 1030 to execute the unicondylar joint image processing method, the method comprising:
acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data, wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image;
identifying and displaying the key points and key axes of the three-dimensional bone image, and calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes;
performing unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and visually displaying the unicondylar prosthesis matching result.
In addition, when the above-mentioned logic instructions in the memory 1030 are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions which, when executed by a computer, enable the computer to execute the unicondylar joint image processing method provided by the methods above, the method comprising:
acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data, wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image;
identifying and displaying the key points and key axes of the three-dimensional bone image, and calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes;
performing unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and visually displaying the unicondylar prosthesis matching result.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the unicondylar joint image processing method provided above, the method comprising:
acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data, wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image;
identifying and displaying the key points and key axes of the three-dimensional bone image, and calculating the size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes;
performing unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and visually displaying the unicondylar prosthesis matching result.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
Through the description of the above embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware. Based on such an understanding, the above technical solution, in essence, or the part that contributes to the prior art, may be embodied in the form of a software product; the computer software product may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (18)

  1. A method for processing unicondylar joint images, which performs unicondylar prosthesis matching based on deep learning of image data, the method comprising the following steps:
    acquiring knee joint image data and obtaining a three-dimensional bone image based on the knee joint image data, wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image;
    identifying and displaying key points and key axes of the three-dimensional bone image, and calculating size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes;
    performing unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and visually displaying the unicondylar prosthesis matching result.
  2. The unicondylar joint image processing method according to claim 1, wherein obtaining a three-dimensional bone image based on the knee joint image data comprises the following steps:
    acquiring image data of the knee joint and performing image segmentation on the image data based on a deep learning algorithm;
    performing three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image, and displaying them visually.
  3. The unicondylar joint image processing method according to claim 2, further comprising, after performing three-dimensional reconstruction based on the segmented image data to obtain the three-dimensional femur image and the three-dimensional tibia image and displaying them visually:
    judging whether the segmentation of the knee joint image data needs to be optimized and, if the segmentation of the knee joint image data needs to be optimized, receiving an input segmentation adjustment instruction and adjusting the segmentation of the knee joint image data.
  4. The unicondylar joint image processing method according to claim 2 or 3, wherein
    performing image segmentation on the image data based on a deep learning algorithm comprises: performing image segmentation on the image data based on a segmentation neural network model; and
    the associated parameters of the segmentation neural network model are determined through training and testing based on an image data set in a lower-limb medical image database;
    wherein the image data set in the lower-limb medical image database is a lower-limb medical image data set in which the femur, tibia, fibula and patella regions are annotated, and the image data set is divided into a training set and a test set.
  5. The unicondylar joint image processing method according to claim 4, wherein
    the segmentation neural network is at least one of 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, dilated convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
  6. The unicondylar joint image processing method according to claim 1, wherein
    in identifying the key points of the three-dimensional bone image, the key points are key anatomical sites; and
    the identification of the key anatomical sites is implemented by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
  7. The method for processing unicondylar joint images according to claim 6, wherein
    the key points of the three-dimensional bone image include at least one of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau;
    the key axes of the three-dimensional bone image include at least one of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial edge of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion;
    the size parameters include at least one of the following: the femoral anteroposterior diameter, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior slope of the tibial plateau;
    the angle parameters include at least one of the following: the posterior slope of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
  8. The unicondylar joint image processing method according to claim 4, wherein determining the associated parameters of the segmentation neural network model through training and testing based on the image data set in the lower-limb medical image database comprises:
    taking the raw data to be segmented and the corresponding annotated pixel-level annotation data as input data of the segmentation neural network model, and training the segmentation neural network model;
    if the evaluation metrics of the segmentation neural network model on a validation set reach preset model evaluation metrics, stopping training and obtaining the associated parameters of the segmentation neural network model;
    if the evaluation metrics of the segmentation neural network model on the validation set do not reach the preset model evaluation metrics, continuing to train the segmentation neural network model until its evaluation metrics on the validation set reach the preset model evaluation metrics.
  9. A unicondylar joint image processing apparatus, comprising:
    an acquisition module, configured to acquire knee joint image data and obtain a three-dimensional bone image based on the knee joint image data, wherein the three-dimensional bone image includes a three-dimensional femur image and a three-dimensional tibia image;
    an identification and calculation module, configured to identify and display key points and key axes of the three-dimensional bone image, and to calculate size parameters and angle parameters of the femur and the tibia respectively according to the key points and the key axes;
    a prosthesis matching module, configured to perform unicondylar prosthesis matching in a database of pre-stored prosthesis models based on the key points, the key axes, the size parameters and the angle parameters, and to visually display the unicondylar prosthesis matching result.
  10. The unicondylar joint image processing apparatus according to claim 9, wherein the acquisition module includes an image segmentation unit and a three-dimensional reconstruction unit;
    the image segmentation unit is configured to acquire image data of the knee joint and perform image segmentation on the image data based on a deep learning algorithm;
    the three-dimensional reconstruction unit is configured to perform three-dimensional reconstruction based on the segmented image data, obtain the three-dimensional femur image and the three-dimensional tibia image, and display them visually.
  11. The unicondylar joint image processing apparatus according to claim 10, wherein the acquisition module further includes a segmentation adjustment unit;
    the segmentation adjustment unit is configured to judge whether the segmentation of the knee joint image data needs to be optimized and, if the segmentation of the knee joint image data needs to be optimized, to receive an input segmentation adjustment instruction and adjust the segmentation of the knee joint image data.
  12. The unicondylar joint image processing apparatus according to claim 10, wherein, when performing image segmentation on the image data based on the deep learning algorithm, the image segmentation unit is configured to:
    perform image segmentation on the image data based on a segmentation neural network model; and
    the associated parameters of the segmentation neural network model are determined through training and testing based on an image data set in a lower-limb medical image database, wherein the image data set in the lower-limb medical image database is a lower-limb medical image data set in which the femur, tibia, fibula and patella regions are annotated, and the image data set is divided into a training set and a test set.
  13. The unicondylar joint image processing apparatus according to claim 12, wherein
    the segmentation neural network is at least one of 2D Dense-Unet, FCN, SegNet, Unet, 3D-Unet, Mask-RCNN, dilated convolution, ENet, CRFasRNN, PSPNet, ParseNet, RefineNet, ReSeg, LSTM-CF, DeepMask, DeepLabV1, DeepLabV2 and DeepLabV3.
  14. The unicondylar joint image processing apparatus according to claim 9, wherein, when the identification and calculation module identifies the key points of the three-dimensional bone image:
    the key points are key anatomical sites; and
    the identification of the key anatomical sites is implemented by at least one of the following neural network models: HRNet, MTCNN, locnet, Pyramid Residual Module, Densenet, hourglass, resnet, SegNet, Unet, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN and SSD.
  15. The unicondylar joint image processing apparatus according to claim 9, wherein
    the key points of the three-dimensional bone image include one or a combination of the following: the lowest point of the distal femur, the lowest point of the tibial plateau, and the medial and lateral edges of the tibial plateau;
    the key axes of the three-dimensional bone image include one or a combination of the following: the femoral mechanical axis, the femoral anatomical axis, the tibial mechanical axis, the tibial anatomical axis, and the line connecting the medial edge of the tibial tubercle with the midpoint of the posterior cruciate ligament insertion;
    the size parameters include one or a combination of the following: the femoral anteroposterior diameter, the mediolateral diameter of the femoral condyle, the anteroposterior diameter of the tibial plateau, and the posterior slope of the tibial plateau;
    the angle parameters include one or a combination of the following: the posterior slope of the tibial plateau, the angle between the femoral mechanical axis and the tibial mechanical axis, and the angle between the femoral anatomical axis and the tibial anatomical axis.
  16. The unicondylar joint image processing apparatus according to claim 12, wherein determining the associated parameters of the segmentation neural network model through training and testing based on the image data set in the lower-limb medical image database comprises:
    taking the raw data to be segmented and the corresponding annotated pixel-level annotation data as input data of the segmentation neural network model, and training the segmentation neural network model;
    if the evaluation metrics of the segmentation neural network model on a validation set reach preset model evaluation metrics, stopping training and obtaining the associated parameters of the segmentation neural network model;
    if the evaluation metrics of the segmentation neural network model on the validation set do not reach the preset model evaluation metrics, continuing to train the segmentation neural network model until its evaluation metrics on the validation set reach the preset model evaluation metrics.
  17. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the program, the unicondylar joint image processing method according to any one of claims 1 to 8 is implemented.
  18. A non-transitory computer-readable storage medium on which a computer program is stored, wherein, when executed by a processor, the computer program implements the steps of the unicondylar joint image processing method according to any one of claims 1 to 8.
PCT/CN2021/120586 2021-02-10 2021-09-26 单髁关节图像的处理方法、装置、设备和存储介质 WO2022170768A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110185454.0A CN112957126B (zh) 2021-02-10 2021-02-10 基于深度学习的单髁置换术前规划方法和相关设备
CN202110185454.0 2021-02-10

Publications (1)

Publication Number Publication Date
WO2022170768A1 true WO2022170768A1 (zh) 2022-08-18

Family

ID=76284901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/120586 WO2022170768A1 (zh) 2021-02-10 2021-09-26 单髁关节图像的处理方法、装置、设备和存储介质

Country Status (2)

Country Link
CN (1) CN112957126B (zh)
WO (1) WO2022170768A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115381553A (zh) * 2022-09-21 2022-11-25 北京长木谷医疗科技有限公司 复杂性骨性融合膝关节的智能定位装置设计方法及系统
CN115607274A (zh) * 2022-10-08 2023-01-17 仰峰(上海)科技发展有限公司 一种基于骨骼形态学大数据的骨折内固定系统的设计方法
CN115810015A (zh) * 2023-02-09 2023-03-17 慧影医疗科技(北京)股份有限公司 基于深度学习的膝关节自动分割方法、系统、介质及设备
CN116098701A (zh) * 2022-12-27 2023-05-12 北京纳通医用机器人科技有限公司 假体规划方法、装置、电子设备及存储介质
CN116894844A (zh) * 2023-07-06 2023-10-17 北京长木谷医疗科技股份有限公司 一种髋关节图像分割与关键点联动识别方法及装置
CN117058149A (zh) * 2023-10-12 2023-11-14 中南大学 一种用于训练识别骨关节炎的医学影像测量模型的方法
CN117653266A (zh) * 2024-01-31 2024-03-08 鑫君特(苏州)医疗科技有限公司 髁间窝截骨规划装置、髁间窝自动截骨装置和相关设备
CN117765228A (zh) * 2023-11-14 2024-03-26 宁波大学 一种基于隐式和显式结构约束的髋关节关键点检测系统

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112957126B (zh) * 2021-02-10 2022-02-08 北京长木谷医疗科技有限公司 基于深度学习的单髁置换术前规划方法和相关设备
CN113842211B (zh) * 2021-09-03 2022-10-21 北京长木谷医疗科技有限公司 膝关节置换的三维术前规划系统及假体模型匹配方法
CN113919020B (zh) * 2021-09-24 2023-12-12 北京长木谷医疗科技股份有限公司 单髁置换用导板设计方法及相关设备
CN113974827B (zh) * 2021-09-30 2023-08-18 杭州三坛医疗科技有限公司 一种手术参考方案生成方法及装置
CN113974828B (zh) * 2021-09-30 2024-02-09 西安交通大学第二附属医院 一种手术参考方案生成方法及装置
CN113936100B (zh) * 2021-10-12 2024-06-28 大连医科大学附属第二医院 一种人体膝关节十字交叉韧带止点提取与重建方法
CN113870261B (zh) * 2021-12-01 2022-05-13 杭州柳叶刀机器人有限公司 用神经网络识别力线的方法与系统、存储介质及电子设备
CN114693602B (zh) * 2022-03-02 2023-04-18 北京长木谷医疗科技有限公司 膝关节动张力平衡态评估方法及装置
CN114663363B (zh) * 2022-03-03 2023-11-17 四川大学 一种基于深度学习的髋关节医学图像处理方法和装置
CN115393272B (zh) * 2022-07-15 2023-04-18 北京长木谷医疗科技有限公司 基于深度学习的膝关节髌骨置换三维术前规划系统及方法
CN115607286B (zh) * 2022-12-20 2023-03-17 北京维卓致远医疗科技发展有限责任公司 基于双目标定的膝关节置换手术导航方法、系统及设备
CN116071372B (zh) * 2022-12-30 2024-03-19 北京长木谷医疗科技股份有限公司 膝关节分割方法、装置、电子设备及存储介质
CN116758210B (zh) * 2023-02-15 2024-03-19 北京纳通医用机器人科技有限公司 骨面模型的三维重建方法、装置、设备及存储介质
CN116309636B (zh) * 2023-02-21 2024-07-23 北京长木谷医疗科技股份有限公司 基于多任务神经网络模型的膝关节分割方法、装置及设备
CN116115318B (zh) * 2023-04-17 2023-07-28 北京壹点灵动科技有限公司 手术撑开器的调节方法、装置、存储介质和处理器
CN116650110B (zh) * 2023-06-12 2024-05-07 北京长木谷医疗科技股份有限公司 基于深度强化学习的膝关节假体自动放置方法及装置
CN116687434B (zh) * 2023-08-03 2023-11-24 北京壹点灵动科技有限公司 对象的术后角度的确定方法、装置、存储介质和处理器
CN117084787B (zh) * 2023-10-18 2024-01-05 杭州键嘉医疗科技股份有限公司 一种胫骨假体安装的内外旋角的检验方法及相关设备

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090043556A1 (en) * 2007-08-07 2009-02-12 Axelson Stuart L Method of and system for planning a surgery
CN104537676A (zh) * 2015-01-12 2015-04-22 南京大学 一种基于在线学习的渐进式图像分割方法
US9345548B2 (en) * 2006-02-27 2016-05-24 Biomet Manufacturing, Llc Patient-specific pre-operative planning
CN110197491A (zh) * 2019-05-17 2019-09-03 上海联影智能医疗科技有限公司 图像分割方法、装置、设备和存储介质
CN111179350A (zh) * 2020-02-13 2020-05-19 张逸凌 基于深度学习的髋关节图像处理方法及计算设备
CN111166474A (zh) * 2019-04-23 2020-05-19 艾瑞迈迪科技石家庄有限公司 一种关节置换手术术前的辅助诊查方法和装置
CN111563906A (zh) * 2020-05-07 2020-08-21 南开大学 一种基于深度卷积神经网络的膝关节磁共振图像自动分割方法
CN112957126A (zh) * 2021-02-10 2021-06-15 北京长木谷医疗科技有限公司 基于深度学习的单髁置换术前规划方法和相关设备
CN113017829A (zh) * 2020-08-22 2021-06-25 张逸凌 一种基于深度学习的全膝关节置换术的术前规划方法、系统、介质和设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103796609A (zh) * 2011-07-20 2014-05-14 史密夫和内修有限公司 用于优化植入物与解剖学的配合的系统和方法
CN104799950A (zh) * 2015-04-30 2015-07-29 上海昕健医疗技术有限公司 基于医学图像的个性化最小创伤膝关节定位导板
CN107822745A (zh) * 2017-10-31 2018-03-23 李威 精准定制膝关节假体的方法
CN108478250A (zh) * 2018-04-04 2018-09-04 重庆医科大学附属第医院 全膝关节置换术的股骨定位方法、个体化截骨导板及假体

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9345548B2 (en) * 2006-02-27 2016-05-24 Biomet Manufacturing, Llc Patient-specific pre-operative planning
US20090043556A1 (en) * 2007-08-07 2009-02-12 Axelson Stuart L Method of and system for planning a surgery
CN104537676A (zh) * 2015-01-12 2015-04-22 南京大学 一种基于在线学习的渐进式图像分割方法
CN111166474A (zh) * 2019-04-23 2020-05-19 艾瑞迈迪科技石家庄有限公司 一种关节置换手术术前的辅助诊查方法和装置
CN110197491A (zh) * 2019-05-17 2019-09-03 上海联影智能医疗科技有限公司 图像分割方法、装置、设备和存储介质
CN111179350A (zh) * 2020-02-13 2020-05-19 张逸凌 基于深度学习的髋关节图像处理方法及计算设备
CN111563906A (zh) * 2020-05-07 2020-08-21 南开大学 一种基于深度卷积神经网络的膝关节磁共振图像自动分割方法
CN113017829A (zh) * 2020-08-22 2021-06-25 张逸凌 一种基于深度学习的全膝关节置换术的术前规划方法、系统、介质和设备
CN112957126A (zh) * 2021-02-10 2021-06-15 北京长木谷医疗科技有限公司 基于深度学习的单髁置换术前规划方法和相关设备

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115381553A (zh) * 2022-09-21 2022-11-25 北京长木谷医疗科技有限公司 复杂性骨性融合膝关节的智能定位装置设计方法及系统
CN115607274A (zh) * 2022-10-08 2023-01-17 仰峰(上海)科技发展有限公司 一种基于骨骼形态学大数据的骨折内固定系统的设计方法
CN116098701A (zh) * 2022-12-27 2023-05-12 北京纳通医用机器人科技有限公司 假体规划方法、装置、电子设备及存储介质
CN116098701B (zh) * 2022-12-27 2024-07-26 北京纳通医用机器人科技有限公司 假体规划方法、装置、电子设备及存储介质
CN115810015A (zh) * 2023-02-09 2023-03-17 慧影医疗科技(北京)股份有限公司 基于深度学习的膝关节自动分割方法、系统、介质及设备
CN116894844A (zh) * 2023-07-06 2023-10-17 北京长木谷医疗科技股份有限公司 一种髋关节图像分割与关键点联动识别方法及装置
CN116894844B (zh) * 2023-07-06 2024-04-02 北京长木谷医疗科技股份有限公司 一种髋关节图像分割与关键点联动识别方法及装置
CN117058149A (zh) * 2023-10-12 2023-11-14 中南大学 一种用于训练识别骨关节炎的医学影像测量模型的方法
CN117058149B (zh) * 2023-10-12 2024-01-02 中南大学 一种用于训练识别骨关节炎的医学影像测量模型的方法
CN117765228A (zh) * 2023-11-14 2024-03-26 宁波大学 一种基于隐式和显式结构约束的髋关节关键点检测系统
CN117653266A (zh) * 2024-01-31 2024-03-08 鑫君特(苏州)医疗科技有限公司 髁间窝截骨规划装置、髁间窝自动截骨装置和相关设备
CN117653266B (zh) * 2024-01-31 2024-04-23 鑫君特(苏州)医疗科技有限公司 髁间窝截骨规划装置、髁间窝自动截骨装置和相关设备

Also Published As

Publication number Publication date
CN112957126B (zh) 2022-02-08
CN112957126A (zh) 2021-06-15

Similar Documents

Publication Publication Date Title
WO2022170768A1 (zh) 单髁关节图像的处理方法、装置、设备和存储介质
CN113017829B (zh) 一种基于深度学习的全膝关节置换术的术前规划方法、系统、介质和设备
WO2022183719A1 (zh) 基于深度学习的全髋关节置换翻修术前规划方法和设备
CN112842529B (zh) 全膝关节图像处理方法及装置
CN103153239B (zh) 用于优化骨科流程参数的系统和方法
Majstorovic et al. Reverse engineering of human bones by using method of anatomical features
Van den Heever et al. Contact stresses in a patient-specific unicompartmental knee replacement
CN106264731A (zh) 一种基于点对点配准技术虚拟膝关节单髁置换术模型构建的方法
CN114431957B (zh) 基于深度学习的全膝关节置换术后翻修术前规划系统
CN107106239A (zh) 外科规划和方法
US20230085093A1 (en) Computerized prediction of humeral prosthesis for shoulder surgery
Wu et al. A graphical guide for constructing a finite element model of the cervical spine with digital orthopedic software
Moldovan et al. Integration of three-dimensional technologies in orthopedics: a tool for preoperative planning of tibial plateau fractures
KR20220106113A (ko) 수술 전 수술 플래닝을 용이하게 하기 위해 생리학적으로 건강하고 그리고 생리학적으로 결함이 있는 해부학적 구조의 재구성 및 특성화를 위한 시스템 및 방법
US20240065766A1 (en) Computer-assisted surgical planning
WO2020205245A1 (en) Closed surface fitting for segmentation of orthopedic medical image data
Wang et al. An automatic extraction method on medical feature points based on PointNet++ for robot‐assisted knee arthroplasty
CN114191075A (zh) 一种个性化膝关节假体模型的快速构建方法及系统
Asvadi et al. Bone surface reconstruction and clinical features estimation from sparse landmarks and statistical shape models: A feasibility study on the femur
Willing et al. Evaluation of a computational model to predict elbow range of motion
Schneble et al. Three-Dimensional Imaging of the Patellofemoral Joint Improves Understanding of Trochlear Anatomy and Pathology and Planning of Realignment
Zhou et al. Improving inter-fragmentary alignment for virtual 3D reconstruction of highly fragmented bone fractures
CN111986800B (zh) 一种以关节运动功能为核心的骨科知识图谱的构建方法
US20240087716A1 (en) Computer-assisted recommendation of inpatient or outpatient care for surgery
Ramme et al. Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21925419

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21925419

Country of ref document: EP

Kind code of ref document: A1