CN114693981A - Automatic knee joint feature point identification method - Google Patents
- Publication number: CN114693981A
- Application number: CN202210400336.1A
- Authority
- CN
- China
- Prior art keywords
- knee joint
- layer
- image
- feature
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/24 — Pattern recognition; classification techniques
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/003 — Navigation within 3D models or images
- G06T7/0012 — Image analysis; biomedical image inspection
- G06T2207/10081 — Image acquisition modality; computed x-ray tomography [CT]
- G06T2207/20081 — Special algorithmic details; training/learning
- G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30008 — Subject of image; bone
Abstract
A method for automatically identifying knee joint feature points comprises the following steps: acquiring a lower-limb CT scan of the patient to be identified; reconstructing a three-dimensional knee joint surface model from the lower-limb CT image data and obtaining a three-dimensional knee joint point cloud model by down-sampling; and automatically identifying the knee joint feature points in the point cloud model with a neural network. The method requires no manual marking of image information: effective image information is extracted automatically from the CT data to predict the positions of the knee joint feature points, and high feature point localization accuracy can be obtained.
Description
Technical Field
The application relates to the technical field of image processing, and in particular to a method for automatically identifying knee joint feature points in medical images.
Background
In a robot-assisted knee replacement surgical system, registration is a critical step for the system as a whole. The current mainstream approach is registration based on medical feature points: feature points are manually marked on the patient's images before the operation, and the corresponding points are marked on the patient's knee joint during the operation to register the navigation system. Manual marking is time-consuming and labor-intensive, and because different surgeons have different marking preferences it easily leads to inconsistent surgical results; automatic feature point identification improves preoperative efficiency and ensures consistency of the outcome. Deep learning is widely applied to feature point identification, and point cloud data in particular can be processed fast enough to meet surgical requirements. The present method automatically extracts knee joint feature points from a knee joint point cloud model, speeding up preoperative image processing, raising the level of intelligence of the surgical assistance system, and ensuring surgical consistency.
Disclosure of Invention
The application aims to provide an image processing method for medical images.
The application provides an image processing method for medical images, comprising the following steps: acquiring a medical image that contains an image of human bone; generating a three-dimensional model of the lower-limb skeleton from the medical image; obtaining a point cloud data set from the three-dimensional simulation model by uniform down-sampling; extracting point cloud features with a feature extraction layer; and mapping the point cloud features to a preset interval with a feature regression layer to determine the regression result of the medical image, where the regression result is the coordinate positions of the feature points in the medical image.
A 3D surface model of the medical image is generated, and the regression result of the medical image is displayed on it in a visualized manner.
A three-dimensional simulation model is reconstructed from the CT images and output in the STL file format.
In a binary-format STL file, the triangular facet information is stored in a fixed number of bytes per facet. Because binary files are small, and to ensure the speed of network processing, the method performs feature point identification on binary-format STL files.
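The fixed layout of binary STL (an 80-byte header, a 4-byte little-endian facet count, then 50 bytes per facet: a normal vector, three vertices, and a 2-byte attribute) means the file can be parsed with the standard struct module alone. A minimal sketch:

```python
import struct

def read_binary_stl(data: bytes):
    """Parse a binary STL: 80-byte header, uint32 facet count,
    then 50 bytes per facet (normal, 3 vertices, 2-byte attribute)."""
    n_facets = struct.unpack_from("<I", data, 80)[0]
    facets, offset = [], 84
    for _ in range(n_facets):
        values = struct.unpack_from("<12fH", data, offset)
        normal, verts = values[:3], values[3:12]
        facets.append((normal, [verts[i:i + 3] for i in range(0, 9, 3)]))
        offset += 50
    return facets

# Build a one-facet STL in memory and parse it back.
blob = (b"\x00" * 80
        + struct.pack("<I", 1)
        + struct.pack("<12fH", 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0))
facets = read_binary_stl(blob)
print(len(facets))  # 1
```

The fixed 50-byte record is also why binary STL is compact compared with the ASCII variant, which is what makes it the faster choice for network preprocessing here.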
The CT images satisfy the following: the slice increment is at most 0.7 mm; the scanning range extends from 10 cm above the femoral head to 5 cm below the ankle; the image size is 512 × 512 pixels; and the images are stored in the DICOM file format.
Unlike image formats such as JPG and GIF, a DICOM file contains not only image information but also the patient's pathological information. Most DICOM files consist of a DICOM file header and a DICOM data set.
The DICOM file header is the identifier of the DICOM file and comprises a file preamble, the DICOM prefix, and file meta-information elements. The DICOM data set is formed by DICOM data elements arranged in a defined order; its storage is divided into two areas, one holding the image data and the other the non-image data.
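Per the DICOM Part 10 file format, the file preamble is 128 bytes and is followed by the 4-byte prefix "DICM"; a quick structural check can therefore be written without any DICOM library:

```python
def is_dicom_part10(data: bytes) -> bool:
    """DICOM Part 10: a 128-byte file preamble followed by the
    4-byte prefix b'DICM', then the file meta-information group."""
    return len(data) >= 132 and data[128:132] == b"DICM"

print(is_dicom_part10(b"\x00" * 128 + b"DICM" + b"meta..."))  # True
print(is_dicom_part10(b"\xff\xd8JPEG-like data"))             # False
```

Real pipelines would go on to read the meta-information elements (transfer syntax, etc.) with a DICOM library; the check above only verifies the header structure described in the text.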
The DICOM files of each patient's CT data were collected and processed with the segmentation and surface-modeling software Mimics 20.0; the output of the segmentation and reconstruction process is a three-dimensional model of each patient's lower-limb bones.
Reconstructing the three-dimensional simulation model from the CT images comprises: segmenting the target tissue by selecting a specific gray-value interval, manually filling holes in the mask and deleting redundant pixels, invoking a three-dimensional calculation command to build the three-dimensional simulation model, and then applying Smooth and Wrap operations to it.
A point cloud data set is obtained from the three-dimensional simulation model by uniform down-sampling.
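One common reading of uniform down-sampling (it is, for example, what Open3D's `uniform_down_sample` does) is simply keeping every k-th point of the set; a sketch under that assumption:

```python
def uniform_downsample(points, every_k):
    """Uniform down-sampling: keep every k-th point, reducing the
    point count by a factor of ~k while preserving overall coverage."""
    return points[::every_k]

cloud = [(float(i), 0.0, 0.0) for i in range(10000)]
sampled = uniform_downsample(cloud, 20)
print(len(sampled))  # 500
```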
Network optimization uses an SGD optimizer, with MSE loss as the optimization target.
To enlarge the training set, a data enhancement operation is performed: the point cloud model of each patient is rotated by a random angle and translated, which improves the robustness of the network.
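The described augmentation — a random-angle rotation plus a translation of each patient's point cloud — is a rigid transform, so it changes the pose but not the shape. A sketch (the rotation axis and translation range are illustrative assumptions, not specified in the text):

```python
import math
import random

def augment(points, rng):
    """Rotate the cloud by a random angle about the z-axis and apply a
    random translation (a rigid transform: the shape is unchanged)."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    tx, ty, tz = (rng.uniform(-10.0, 10.0) for _ in range(3))
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

rng = random.Random(0)
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
out = augment(pts, rng)
print(round(math.dist(out[0], out[1]), 6))  # 1.0 — distances are preserved
```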
A normalization operation is applied to the data to accelerate network convergence.
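The text does not specify which normalization is used; a common choice for point clouds is centering at the centroid and scaling into the unit sphere, sketched here under that assumption:

```python
import math

def normalize(points):
    """Center the cloud at its centroid and scale it into the unit
    sphere — a common point cloud normalization that speeds up convergence."""
    n = len(points)
    cx, cy, cz = (sum(p[i] for p in points) / n for i in range(3))
    centered = [(x - cx, y - cy, z - cz) for x, y, z in points]
    r = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centered) or 1.0
    return [(x / r, y / r, z / r) for x, y, z in centered]

norm = normalize([(2.0, 0.0, 0.0), (4.0, 0.0, 0.0), (6.0, 0.0, 0.0)])
print(norm)  # [(-1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
```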
The feature point recognition network model comprises a feature extraction layer and a feature regression layer. The feature extraction layer is composed of several Set Abstraction modules, each consisting of a sampling layer, a grouping layer, and a PointNet layer: after each sampling and grouping operation, a PointNet layer is applied.
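In PointNet++-style networks, the sampling layer typically selects its local-region centroids by farthest point sampling; assuming that choice (the text only names a "sampling layer"), a pure-Python sketch:

```python
import math

def farthest_point_sampling(points, k):
    """Iteratively pick the point farthest from the centroids chosen so
    far, giving well-spread centers for the local regions."""
    chosen = [0]                                   # seed with the first point
    dist = [math.dist(p, points[0]) for p in points]
    while len(chosen) < k:
        idx = max(range(len(points)), key=dist.__getitem__)
        chosen.append(idx)
        dist = [min(d, math.dist(p, points[idx]))  # distance to nearest centroid
                for d, p in zip(dist, points)]
    return [points[i] for i in chosen]

square = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.5, 0)]
print(farthest_point_sampling(square, 4))  # the four corners; center excluded
```

The grouping layer would then gather, for each chosen centroid, its neighbors within a radius (ball query) to form the local region sets fed to PointNet.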
The feature extraction layer uses PointNet to iteratively extract features from local regions of the point set, grouped by spatial distance, so that features at progressively larger local scales are learned.
Because point sets are usually unevenly distributed, and network performance degrades if a uniform distribution is assumed by default, the method uses a density-adaptive feature extraction approach.
The core layer of the feature extraction layer is PointNet. For an unordered point set {x1, x2, …, xn} with xi ∈ R^d, PointNet learns a set function f: χ → R that maps the point set to a feature, as follows:

f(x1, x2, …, xn) = γ(MAX{h(xi)}), i = 1, 2, …, n

where R^d is the d-dimensional Euclidean space containing the points, MAX is an element-wise max-pooling (symmetric) operation, and γ and h are typically MLP networks.
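The formula above can be made concrete with toy stand-ins for the MLPs γ and h: per-point features via h, an element-wise max pool, then γ. Because max pooling is a symmetric function, f is invariant to the ordering of the input points:

```python
def pointnet(points, h, gamma):
    """f(x1, …, xn) = γ(MAX{h(xi)}): per-point features via h, an
    element-wise max pool, then γ — invariant to point order."""
    feats = [h(p) for p in points]
    pooled = [max(col) for col in zip(*feats)]
    return gamma(pooled)

# Toy stand-ins for the MLPs of the real network.
h = lambda p: (p[0], p[1], p[0] + p[1])
gamma = sum
pts = [(1, 2), (3, 0), (0, 5)]
print(pointnet(pts, h, gamma))                  # 13
print(pointnet(list(reversed(pts)), h, gamma))  # 13 — order does not matter
```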
Compared with the prior art, the invention has the beneficial technical effects that:
the method can perform feature point regression on medical image data of different patients, and the medical image data regression performed by the method has strong robustness after simulation, training and testing. Although the characteristic point extraction result has certain errors, the method already provides a basic guiding function for the knee joint replacement surgery.
Drawings
Fig. 1 is an overall workflow diagram of the present invention.
Fig. 2 is a flowchart of an embodiment of the present invention applied to a robot-assisted knee replacement.
Fig. 3 is a data set production flow diagram.
Fig. 4 is a diagram of STL file sampling effects.
Fig. 5 is a design diagram of a feature point identification network.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The embodiments described here are exemplary only and should not be construed as limiting the invention.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The word "comprising" or "comprises", and the like, means that the element or item appearing before the word covers the element or item listed after the word without excluding other elements or items.
Deep learning learns the intrinsic laws and representation hierarchies of sample data; the information obtained during learning is of great help in interpreting data such as text, images, and sound. Its ultimate aim is to give machines an analysis and learning ability like a human's, able to recognize data such as text, images, and sound.
Robot-assisted knee joint replacement not only provides the surgeon with reference information for osteotomy and prosthesis placement based on the anatomical path, but also includes a robotic arm or working assembly that can execute part of the operation or guide the surgeon within a safe working range. It further improves the accuracy of knee replacement surgery and is a hot spot in the current development of knee surgery.
Robot-assisted knee replacement involves preoperative intelligent planning, accurate intraoperative navigation, and other steps. Three-dimensional medical image registration matches the preoperatively planned data to the intraoperative navigation images and is a key technology of a robot-assisted knee replacement surgical system: intraoperative guidance and tracking can only be achieved by mapping the preoperatively planned three-dimensional image data into the patient's physical coordinate system.
Owing to individual differences in the bone structures and soft tissues of the knee joint, it is difficult to acquire the coordinates of the knee joint feature points accurately and quickly, which may cause misalignment or registration failure between the patient and the navigation system during the operation. Traditional medical feature point marking relies on manual annotation, which is tedious, time-consuming, and highly subjective; automatic feature point identification uses deep learning trained on large data sets to reduce the errors of manual marking.
As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that capture "information" from images or multidimensional data — information in Shannon's sense, which can be used to help make a "decision".
The scheme provided by the embodiments of the application involves technologies such as artificial intelligence, computer vision, and deep learning, and is explained by the following embodiments:
Fig. 1 shows the overall workflow of the automatic knee joint feature point identification method of the present application. Fig. 2 is a flowchart of an embodiment applied to robot-assisted knee replacement. Step S1 comprises: extracting from a medical image database the image data of patients without degenerative joint diseases such as arthritis, and establishing a dedicated database.
Step S2 comprises: as shown in fig. 3, segmenting the target tissue by selecting a specific gray-value interval, manually filling holes in the mask, deleting redundant pixels, invoking a three-dimensional calculation command to build a three-dimensional simulation model, and then applying Smooth and Wrap operations to it.
Step S3 comprises: as shown in fig. 4, down-sampling all STL files uniformly to obtain the three-dimensional knee joint point cloud model.
Step S4 comprises the following sub-steps. S4-1: perform a data enhancement operation to enlarge the training set — rotate the point cloud model of each patient by a random angle and translate it, improving the robustness of the network.
S4-2: apply a normalization operation to the data to accelerate network convergence.
Step S5 comprises: the data set train_x and the labels train_y are numbered in one-to-one correspondence with the coordinate axes, and mean square error (MSE) and the coefficient of determination (R-squared) are used as evaluation indexes.
The mean square error is the mean of the squared errors between the fitted data and the original data at corresponding sample points; the smaller its value, the better the fit. The coefficient of determination lies between 0 and 1: the closer to 1, the better the model's prediction; the closer to 0, the worse.
The mean square error and the coefficient of determination are calculated as:

MSE = (1/m) Σᵢ (yᵢ − f(xᵢ))²

R² = 1 − Σᵢ (yᵢ − f(xᵢ))² / Σᵢ (yᵢ − ȳ)²

where m is the number of samples, yᵢ is the observed value, and f(xᵢ) is the fitted value.
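The two evaluation indexes can be computed directly from their definitions; a small sketch with illustrative data:

```python
def mse(y, y_pred):
    """Mean of the squared errors at corresponding sample points."""
    return sum((a - b) ** 2 for a, b in zip(y, y_pred)) / len(y)

def r_squared(y, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    y_mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_pred))
    ss_tot = sum((a - y_mean) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.8]
print(round(mse(y_true, y_hat), 4))        # 0.025
print(round(r_squared(y_true, y_hat), 3))  # 0.98
```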
The network model is shown in fig. 5: model features are extracted by a point cloud network, and feature point positions are identified by a regression network.
Claims (8)
1. A knee joint feature point automatic identification method is characterized by comprising the following steps:
acquiring a lower limb CT of a patient to be identified;
performing three-dimensional reconstruction of the knee joint surface model from the lower-limb CT image data, where the CT processing is divided into three steps — threshold segmentation, editing, and STL file generation — and down-sampling all STL files uniformly to obtain a three-dimensional knee joint point cloud model;
and training a neural network model on the annotated CT image labels to automatically identify the knee joint feature points.
2. The method of claim 1, wherein the acquiring the target CT image to be identified comprises:
determining a detection area of a target CT image;
and screening qualified CT images from the medical image database.
3. The method as claimed in claim 2, further comprising, after acquiring the target CT image to be identified:
and determining that the target CT image to be identified accords with the image content rule corresponding to the identified object.
4. The method of claim 1, wherein the neural network model for automatically identifying the knee joint feature points comprises a feature extraction layer and a feature regression layer, and the feature extraction layer comprises a sampling layer, a grouping layer and a PointNet layer.
5. The method of claim 4, wherein the sampling layer selects a set of points from the input points that define a centroid of the local region, and wherein the grouping layer constructs the set of local regions by finding neighboring points of the centroid.
6. The method of claim 4, wherein the feature regression layer is comprised of multiple layers of MLP networks.
7. The method of claim 6, wherein the output of the multi-layer MLP network is coordinates of predicted feature points.
8. The method of claim 4, wherein the PointNet layer input is a collection of point cloud data.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210400336.1A | 2022-04-17 | 2022-04-17 | Automatic knee joint feature point identification method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114693981A | 2022-07-01 |
Family
ID=82142115
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210400336.1A (pending; published as CN114693981A) | Automatic knee joint feature point identification method | 2022-04-17 | 2022-04-17 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN114693981A (en) |
Cited By (3)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN116071386A | 2023-01-09 | 2023-05-05 | Dynamic segmentation method for medical image of joint disease |
| CN116071386B | 2023-01-09 | 2023-10-03 | Dynamic segmentation method for medical image of joint disease |
| CN117530772A | 2023-11-01 | 2024-02-09 | Method, device, medium and equipment for processing image before shoulder joint replacement operation |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |