CN115018977A - Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm - Google Patents

Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm

Info

Publication number
CN115018977A
Authority
CN
China
Prior art keywords
coordinate system
bone
joint
image
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210575650.3A
Other languages
Chinese (zh)
Inventor
钱志辉
王胜利
任雷
王坤阳
刘翔宇
庄智强
宋广生
任露泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202210575650.3A priority Critical patent/CN115018977A/en
Publication of CN115018977A publication Critical patent/CN115018977A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/004 - Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/80 - Geometric correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Computational Linguistics (AREA)
  • Computer Graphics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a semi-automatic registration method based on biplane X-ray and a joint three-dimensional motion solving algorithm, comprising CT image processing, biplane X-ray image processing, laboratory three-dimensional environment reconstruction, semi-automatic 3D-2D registration, coordinate transformation, and calculation of joint angles and joint displacements. The biplane dynamic X-ray motion capture system is a state-of-the-art in vivo biological motion testing technique that acquires dynamic images of a target joint. Using a bone model derived from CT image processing together with the biplane X-ray images, model-based semi-automatic registration can be performed, which improves registration efficiency and accuracy compared with existing manual registration techniques. Based on the registered three-dimensional bone motion data, the disclosed joint three-dimensional motion solving algorithm uses coordinate transformations to calculate joint angles and joint displacements, so that the six degrees of freedom of the target joint can be quantified and its three-dimensional motion patterns analyzed.

Description

Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm
Technical Field
The invention relates to the field of three-dimensional motion capture and analysis, and in particular to a semi-automatic registration method based on biplane X-ray and a joint three-dimensional motion solving algorithm for the semi-automatically registered data.
Background
Joint motion analysis quantitatively describes human joint motion based on kinematic principles, combined with anatomy, physiology, and motion analysis techniques. Analyzing joint motion patterns is important: in engineering it provides a biomechanical basis for the design of joint implants and the development of rehabilitation aids, and clinically it supports the diagnosis of joint diseases and the evaluation of postoperative rehabilitation.
The bones of different parts of the human body, such as the foot and hand bones, vary in size and shape, and the motion of the joints they form appears as changes in six spatial degrees of freedom, which poses a serious challenge for studying human joint motion.
Early joint motion studies mostly used in vitro tests, in which cadaveric joint specimens were driven manually or by robotic actuators to simulate specific movements while the motion of the relevant bones was captured. Such in vitro tests have obvious limitations: experiments based on motion-simulation devices cannot reproduce the real motion patterns of human joints or restore in vivo joint motion. Modern motion analysis mostly uses optical motion capture (OMC) systems, in which markers are attached to the skin, their motion is captured with infrared light, and the spatial motion of the bones is inferred from the marker trajectories and a multi-segment model of the human body. This method also has significant drawbacks: (1) such studies rely on many preconditions and simplifications, for example dividing a human joint into several rigid segments; (2) because of skin slippage, markers on the skin surface slide relative to the underlying bone, so errors are especially pronounced for joints with small bones and many articulations, such as the foot and hand. Some studies have, under local anesthesia, implanted bone pins into the target bone and tracked markers rigidly attached to the pins; this reduces the effect of skin slippage but is highly invasive and traumatic, making such tests difficult to perform on healthy subjects. In recent years, medical imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and X-ray have been introduced into in vivo musculoskeletal biomechanics to quantify the three-dimensional morphology and position of bones without invasive markers; however, most of these techniques are limited to static or quasi-static, two-dimensional measurements.
In recent years, the biplane dynamic X-ray imaging system (DFIS) has been developed and applied in biomechanics research. The system uses two orthogonal X-ray machines to synchronously capture the motion of a target joint, producing biplane dynamic X-ray images of the moving joint, and uses a 3D-2D registration technique to analyze the motion of a bone model in three-dimensional space. 3D-2D registration refers to reconstructing the biplane X-ray system environment, including the light source positions and image intensifier positions, in a virtual three-dimensional space and correctly matching a three-dimensional bone model to each frame of the biplane X-ray images based on bone contours or image grayscale; this is also called model-based tracking.
The DFIS enables in vivo, non-invasive, three-dimensional, and dynamic testing, and its accuracy in analyzing three-dimensional joint motion is high, so it has become a research hotspot in the field of biological motion testing. The key technology of this method, however, is 3D-2D registration. Most existing registration approaches construct a virtual biplane environment in three-dimensional software such as Rhino or Maya and perform manual frame-by-frame matching: the bone contour is drawn by hand on every X-ray frame, and the bone model is then translated or rotated until its silhouette aligns with the image contour. Because the DFIS acquisition rate is high, a single motion trial can produce hundreds or even thousands of X-ray frames, so manual registration is time-consuming, labor-intensive, and error-prone, and is currently the technical bottleneck in studying joint motion with DFIS.
In addition, 3D-2D registration yields six-degree-of-freedom data for individual bones in space, which does not directly reflect the relative motion of a joint (two adjacent bones); the three-dimensional motion of the joint must therefore be derived by mathematical analysis. Different studies characterize and calculate three-dimensional joint motion in different ways, for example with the Joint Coordinate System (JCS) or various angle conventions, so the results of different studies are poorly comparable.
Given the current state of 3D-2D registration technology and three-dimensional joint motion analysis, a fast and efficient semi-automatic registration method and a biomechanically meaningful joint three-dimensional motion solving algorithm are urgently needed.
Disclosure of Invention
The invention aims to provide a semi-automatic registration method based on biplane X-ray that overcomes the low efficiency and large errors caused by the manual operation of most existing 3D-2D registration methods, and, on this basis, to develop a joint three-dimensional motion solving algorithm for the semi-automatically registered biplane X-ray data, so that the three-dimensional six-degree-of-freedom motion of a joint can be calculated accurately and its three-dimensional motion characteristics analyzed.
A semi-automatic registration method based on biplane X-ray and a joint three-dimensional motion solving algorithm are disclosed, wherein the semi-automatic registration method based on biplane X-ray comprises the following steps:
S1: CT image processing, comprising:
S11: Creating a target region: based on the CT images of the target joint, an outer bone contour is created for a given bone on an anatomical plane (sagittal, frontal, or transverse). The cross-sectional contour is drawn frame by frame, or drawn at intervals of a few frames with the skipped frames filled in automatically by interpolation, until an outer contour has been created on every frame that contains the target bone in that anatomical plane. The region enclosed by the outer contour of the bone cross-section is the target region;
S12: Changing pixels: in all frames for which the target region has been created, the pixel values outside the target region are set to transparent, while the pixel values inside the target region are kept unchanged;
S13: Synthesizing a three-dimensional volume: all pixel-modified frames are assembled, according to the voxel values of the CT images, into a three-dimensional volume (a cuboid);
S14: Customizing an anatomical coordinate system: a three-dimensional solid bone model is reconstructed from the target region of S11, and an anatomical coordinate system is established for each solid bone model belonging to the target joint; for typical human joints such as the shoulder, hip, knee, and ankle, the standard conventions recommended by the International Society of Biomechanics (ISB) are followed.
The image processing operations of S1 are performed for every bone contained in the target joint.
S2: Biplane X-ray image processing, comprising:
S21: Image splitting: the acquired dynamic X-ray data of the target joint is usually an image set or a video, which is split into an image sequence;
S22: Distortion correction: before testing, X-ray images of a hole-array grid are acquired for spatial calibration, and distortion correction is applied to the original biplane X-ray image sequence based on the hole-array grid data. The distortion correction algorithm compares the spacing between the holes in the grid image with the known physical spacing, computes a transformation matrix that corrects the image, and then applies this matrix to all acquired image sequences.
S3: Laboratory three-dimensional environment reconstruction, comprising:
S31: Establishing the global coordinate system: the pre-experiment calibration uses a LEGO (Legao) calibration cube embedded with 64 steel balls and 4 specially shaped metal markers. The center of the calibration cube serves as the origin of the global laboratory coordinate system, with the X axis parallel to the up-down (superior-inferior) direction of the human body, the Y axis parallel to the front-back (anterior-posterior) direction, and the Z axis parallel to the inside-outside (medial-lateral) direction;
S32: Establishing local coordinate systems: the three-dimensional volume (cuboid) synthesized in S13 from the pixel-modified frames is placed in the reconstructed laboratory three-dimensional environment. The center of the cuboid is defined as the origin of a local coordinate system, and the x, y, and z axes of the local coordinate system are parallel to the edges along the length, width, and height of the cuboid. During registration, the pose of the local coordinate system changes continuously as the frame number of the X-ray image sequence advances;
S33: Determining the light source and biplane image positions: the centers of the 64 steel balls and 4 specially shaped metal markers embedded in the calibration cube are digitized in the X-ray images of the cube and compared with their known positions. A direct linear transformation (DLT) defines the linear mapping between three-dimensional object space and the two-dimensional image plane, from which the pose matrices of the cameras (light sources) and imaging planes (biplane images) are determined, i.e., the positions of the light sources and biplane images in the global environment.
In the reconstructed laboratory three-dimensional environment, the light sources emit virtual X-rays and the biplane images receive them; under the virtual X-rays, only the bone within the three-dimensional volume is visualized while the volume outside the bone is rendered transparent, thereby generating a virtual bone model.
S4: Semi-automatic 3D-2D registration, comprising:
S41: Adding edge detection and contrast filters: to improve registration accuracy, an edge detection filter and a contrast enhancement filter are applied to the bone model and to the biplane image sequence, enhancing the bone edge strength and image sharpness in the X-ray images and thus improving contrast.
S42: Manually registering the initial frame and calculating similarity: in the reconstructed laboratory three-dimensional environment, a biplane X-ray frame with relatively clear bone contours and grayscale is selected as the initial frame. The virtual bone model is translated and rotated until, in the two two-dimensional views (looking from each light source toward its X-ray image), its silhouette coincides completely with the bone contours in the two X-ray images. Because different bone regions in the X-ray images have different gray levels, and the virtual bone model likewise shows different gray levels from different viewing angles, the matching similarity between the virtual bone model and the X-ray images of the current frame is computed from the bone contours and gray levels using a downhill simplex optimization algorithm or a particle swarm optimization algorithm. A qualifying similarity value is set: when the computed similarity value is smaller than the qualifying value, registration of the initial frame is complete; otherwise the spatial pose of the virtual bone model is readjusted and the matching similarity recomputed until the computed value falls below the qualifying value, at which point the pose of the local coordinate system for the initial frame is recorded;
S43: Automatic registration of the entire image sequence: starting from the registered initial frame, the optimization algorithm is executed over the image sequence in stages, for example automatically registering the next 5 to 10 frames. As the frame number advances, the pose of the virtual bone model is adjusted automatically so that it coincides with the bone contours in the two X-ray images and the gray levels match, and the pose of the local coordinate system for the current frame is recorded automatically. The matching similarity is computed for every frame: if it is below the qualifying similarity value, registration of that stage is complete; if any frame's similarity is not below the qualifying value, the pose of the virtual bone model is adjusted manually and the similarity recomputed until the value for every frame in the stage is below the qualifying value. Automatic registration proceeds stage by stage until the whole image sequence is registered, and the pose matrix of the virtual bone model corresponding to each frame is output.
The semi-automatic registration of S4 must be completed for every bone contained in the target joint.
The pose matrix of the virtual bone model is obtained from the relationship between the local coordinate system and the global coordinate system and does not by itself reflect the motion characteristics of the joint. For the computed joint angles and joint displacements to have kinematic and biomechanical meaning, a series of coordinate transformations is needed to relate the custom anatomical coordinate systems to the global coordinate system of the biplane environment.
The joint three-dimensional motion solving algorithm comprises the following steps:
S5: Coordinate transformation, comprising:
S51: Anatomical coordinate system / CT coordinate system: the two bones forming the target joint are denoted A and B. The custom anatomical coordinate systems are defined in the CT coordinate system; first, the transformation matrices between the anatomical coordinate systems A_an of bone A and B_an of bone B and the CT scanning coordinate system (CT) are established. For bone A and bone B, the transformations from the anatomical coordinate system to the CT coordinate system are written:
A_an/CT
B_an/CT
S52: CT coordinate system / local coordinate system: the local coordinate systems of bone A and bone B are denoted L_a and L_b, and the transformations from the CT coordinate system to the corresponding local coordinate systems are written:
CT/L_a
CT/L_b
S53: Local coordinate system / global coordinate system: the semi-automatic 3D-2D registration of S4 outputs the transformation matrix from the local coordinate system to the global coordinate system (GC); for bone A and bone B these are written:
L_a/GC
L_b/GC
Combining the transformation matrices of S51, S52, and S53 relates the anatomical coordinate systems to the global coordinate system; the transformation relations for bone A and bone B are:
A_an/GC = A_an/CT × CT/L_a × L_a/GC
B_an/GC = B_an/CT × CT/L_b × L_b/GC
S6: Calculating joint angles and joint displacements, comprising:
S61: Extracting Euler angles: of the bones A and B forming the target joint, one is the proximal bone and the other the distal bone. The joint angle is determined by the overall transformation from the proximal bone to the distal bone; the overall transformation matrix is denoted R_joint, the transformation matrix between the proximal anatomical coordinate system and the global coordinate system is denoted R_proximal, and the transformation matrix between the distal anatomical coordinate system and the global coordinate system is denoted R_distal:
R_joint = R_proximal^(-1) × R_distal
The rotation angles about the three coordinate axes are denoted θ_x, θ_y, and θ_z; from the relations among the elements of the transformation matrix R_joint, the Euler angles θ_x, θ_y, and θ_z are calculated.
S62: Calculating joint displacement: the origin of the anatomical coordinate system of bone A is denoted O_1 and that of bone B is denoted O_2. The joint displacement is determined by transforming the anatomical coordinate system origins into the global coordinate system; the three-dimensional coordinates of O_1 and O_2 in the global coordinate system are denoted O_1GC and O_2GC:
O_1GC = GC/L_a × L_a/CT × O_1
O_2GC = GC/L_b × L_b/CT × O_2
The vector between the anatomical coordinate system origins of bone A and bone B is projected onto each axis of the proximal bone's anatomical coordinate system to quantify the translation; the result is expressed as displacements along the three coordinate axes of the proximal bone's anatomical coordinate system.
The invention has the following beneficial effects:
Compared with existing manual registration methods, the semi-automatic registration method based on biplane X-ray greatly improves processing efficiency, saves labor and time, and effectively improves registration accuracy, providing an important methodological and technical reference for biological three-dimensional motion image processing based on biplane dynamic X-ray imaging systems.
The joint three-dimensional motion solving algorithm provided by the invention converts the bone three-dimensional motion data obtained from biplane registration into six-degree-of-freedom relative motion data of the joint, enabling researchers to carry out biomechanically meaningful analyses of three-dimensional joint motion.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention.
FIG. 2 is a schematic diagram of a CT image processing procedure according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the reconstructed laboratory three-dimensional environment according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a specific operation process of semi-automatic 3D-2D registration according to an embodiment of the present invention.
Detailed Description
As shown in Figs. 1 to 4, a semi-automatic registration method based on biplane X-ray and a joint three-dimensional motion solving algorithm are provided. The semi-automatic registration method based on biplane X-ray comprises the following steps:
S1: CT image processing, as shown in Fig. 2, comprising:
S11: Creating a target region: Fig. 2 takes the ankle joint as an example. CT images of the ankle joint are acquired, and an outer bone contour is created for the tibia on an anatomical plane (sagittal, frontal, or transverse). The cross-sectional contour is drawn frame by frame, or drawn at intervals of a few frames with the skipped frames filled in automatically by interpolation, until an outer contour has been created on every frame containing that bone in the chosen anatomical plane. The region enclosed by the outer contour of the tibial cross-section is the target region;
S12: Changing pixels: in all frames for which the target region has been created, the pixel values outside the target region are set to transparent, while the pixel values inside the target region are kept unchanged;
S13: Synthesizing a three-dimensional volume: all pixel-modified frames are assembled, according to the voxel values of the CT images, into a three-dimensional volume (a cuboid);
S14: Customizing an anatomical coordinate system: a three-dimensional solid bone model is reconstructed from the target region of S11, and an anatomical coordinate system is established for each solid bone model belonging to the target joint; for typical human joints such as the shoulder, hip, knee, and ankle, the standard conventions recommended by the International Society of Biomechanics (ISB) are followed.
The image processing operations of S1 are performed for every bone contained in the target joint.
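For illustration only, the masking and stacking of S11-S13 can be prototyped in a few lines of Python with NumPy. The slice arrays, the per-slice contour masks, and the choice of NaN as the "transparent" fill value are assumptions of this sketch, not part of the patented method:

```python
import numpy as np

def build_bone_volume(ct_slices, region_masks):
    """Assemble a masked CT volume for one bone (illustrative sketch for S11-S13).

    ct_slices    : list of 2D arrays, one per CT frame (e.g. Hounsfield values)
    region_masks : list of 2D boolean arrays; True inside the drawn bone contour
    Pixels outside the target region are set to a "transparent" value (here NaN),
    pixels inside keep their original values (S12), and the masked slices are
    stacked into a 3D volume, the cuboid of S13.
    """
    masked = []
    for ct, mask in zip(ct_slices, region_masks):
        slice_out = np.where(mask, ct.astype(float), np.nan)  # outside region -> transparent
        masked.append(slice_out)
    return np.stack(masked, axis=0)  # shape: (n_slices, H, W)
```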
S2: processing of biplane X-ray images comprising:
S21: Image splitting: the acquired dynamic X-ray data of the target joint is usually an image set or a video, which is split into an image sequence;
S22: Distortion correction: before testing, X-ray images of a hole-array grid are acquired for spatial calibration, and distortion correction is applied to the original biplane X-ray image sequence based on the hole-array grid data. The distortion correction algorithm compares the spacing between the holes in the grid image with the known physical spacing, computes a transformation matrix that corrects the image, and then applies this matrix to all acquired image sequences.
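A minimal sketch of the idea behind S22, under two stated assumptions: the grid-hole centres have already been detected and ordered to match the ideal grid, and a single global projective warp is an acceptable approximation (image-intensifier distortion is often handled instead with local or polynomial warps). The OpenCV calls are used with their documented signatures:

```python
import cv2
import numpy as np

def estimate_correction(detected_centers, grid_shape, hole_spacing_mm, px_per_mm):
    """Fit a corrective warp from detected grid-hole centres (sketch for S22).

    detected_centers : (N, 2) array of hole centres found in the distorted grid image,
                       ordered row by row to match the ideal grid layout.
    grid_shape       : (rows, cols) of the hole-array grid.
    hole_spacing_mm  : known physical spacing between holes.
    px_per_mm        : desired scale of the corrected image.
    """
    rows, cols = grid_shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    ideal = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    ideal *= hole_spacing_mm * px_per_mm  # ideal (undistorted) hole positions in pixels
    H, _ = cv2.findHomography(detected_centers.astype(np.float32), ideal, cv2.RANSAC)
    return H

def correct_sequence(frames, H, out_size):
    """Apply the same corrective transform to every frame of the acquired sequence."""
    return [cv2.warpPerspective(frame, H, out_size) for frame in frames]
```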
S3: reconstructing a three-dimensional environment of a laboratory, as shown in fig. 3, comprising:
S31: Establishing the global coordinate system: the pre-experiment calibration uses a LEGO (Legao) calibration cube embedded with 64 steel balls and 4 specially shaped metal markers. The center of the calibration cube serves as the origin of the global laboratory coordinate system, with the X axis parallel to the up-down (superior-inferior) direction of the human body, the Y axis parallel to the front-back (anterior-posterior) direction, and the Z axis parallel to the inside-outside (medial-lateral) direction; the global coordinate system in the reconstructed laboratory three-dimensional environment is shown in Fig. 3;
S32: Establishing local coordinate systems: the three-dimensional volume (cuboid) synthesized in S13 from the pixel-modified frames is placed in the reconstructed laboratory three-dimensional environment. The center of the cuboid is defined as the origin of a local coordinate system, and the x, y, and z axes of the local coordinate system are parallel to the edges along the length, width, and height of the cuboid; the local coordinate systems of the two bones in the reconstructed environment of Fig. 3 are denoted L_a and L_b. During registration, the pose of the local coordinate system changes continuously as the frame number of the X-ray image sequence advances;
S33: Determining the light source and biplane image positions: the centers of the 64 steel balls and 4 specially shaped metal markers embedded in the calibration cube are digitized in the X-ray images of the cube and compared with their known positions. A direct linear transformation (DLT) defines the linear mapping between three-dimensional object space and the two-dimensional image plane, from which the pose matrices of the cameras (light sources) and imaging planes (biplane images) are determined, i.e., the positions of the light sources and biplane images in the global environment; Fig. 3 illustrates the relative positions of the light sources and biplane images in the three-dimensional environment.
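The direct linear transformation of S33 can be sketched as a standard 11-parameter DLT fit. The inputs (digitised 2D centre points and the known 3D bead/marker positions) are assumed, and this is an illustrative implementation rather than the patent's own code:

```python
import numpy as np

def dlt_calibrate(points_3d, points_2d):
    """Solve the 11 DLT parameters mapping 3D object space to a 2D image plane (sketch for S33).

    points_3d : (N, 3) known bead/marker positions in the global coordinate system
    points_2d : (N, 2) digitised image coordinates of the same points (N >= 6)
    """
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    A = np.asarray(rows, dtype=float)
    b = np.asarray(rhs, dtype=float)
    L, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution for L1..L11
    return L

def dlt_project(L, point_3d):
    """Project a 3D point with the fitted DLT parameters (useful for checking residuals)."""
    X, Y, Z = point_3d
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return u, v
```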
In the reconstructed laboratory three-dimensional environment, the light sources emit virtual X-rays and the biplane images receive them; under the virtual X-rays, only the bone within each three-dimensional volume is visualized while the volume outside the bone is rendered transparent, thereby generating a virtual bone model. As shown in Fig. 3, the tibia and talus are visualized in the reconstructed laboratory three-dimensional environment, generating a virtual bone model of the tibiotalar joint.
S4: semi-automatic 3D-2D registration, as shown in fig. 4, includes the steps of:
S41: Adding edge detection and contrast filters: to improve registration accuracy, an edge detection filter and a contrast enhancement filter are applied to the virtual bone model and to the biplane image sequence, enhancing the bone edge strength and image sharpness in the X-ray images and thus improving contrast;
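One plausible choice of filters for S41 (purely illustrative; the patent does not prescribe particular filters) is CLAHE contrast enhancement combined with a Canny edge map:

```python
import cv2

def enhance_xray(frame_u8):
    """Contrast enhancement plus edge emphasis for an 8-bit X-ray frame (sketch for S41)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply(frame_u8)       # local contrast enhancement
    edges = cv2.Canny(contrast, 50, 150)   # emphasises bone edge strength
    return contrast, edges
```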
S42: Manually registering the initial frame and calculating similarity: in the reconstructed laboratory three-dimensional environment, a biplane X-ray frame with relatively clear bone contours and grayscale is selected as the initial frame; as shown in Fig. 4, the ankle joint in the neutral position is selected as the initial frame. The virtual bone model is translated and rotated until, in the two two-dimensional views (looking from each light source toward its X-ray image), its silhouette coincides completely with the bone contours in the two X-ray images. Because different bone regions in the X-ray images have different gray levels, and the virtual bone model likewise shows different gray levels from different viewing angles, the matching similarity between the virtual bone model and the X-ray images of the current frame is computed from the bone contours and gray levels using a downhill simplex optimization algorithm or a particle swarm optimization algorithm. A qualifying similarity value is set: when the computed similarity value is smaller than the qualifying value, registration of the initial frame is complete; otherwise the spatial pose of the virtual bone model is readjusted and the matching similarity recomputed until the computed value falls below the qualifying value, at which point the pose of the local coordinate system for the initial frame is recorded;
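A compact sketch of the per-frame optimisation in S42 using SciPy's downhill simplex (Nelder-Mead). The helpers `render_drr` (projects the virtual bone model onto the two imaging planes for a given 6-DOF pose) and `dissimilarity` (contour/grey-level mismatch score, lower is better) are hypothetical placeholders for whichever projection and similarity measure the system actually uses:

```python
import numpy as np
from scipy.optimize import minimize

def register_frame(pose0, xray_pair, render_drr, dissimilarity, pass_value=0.05):
    """Refine a 6-DOF pose [tx, ty, tz, rx, ry, rz] for one biplane frame (sketch for S42).

    pose0        : initial pose, e.g. from manual alignment of the initial frame
    xray_pair    : the two (filtered) X-ray images of this frame
    render_drr   : callable(pose) -> projected model images for both planes (assumed)
    dissimilarity: callable(projections, xray_pair) -> scalar mismatch (assumed)
    pass_value   : the "qualifying similarity value"; below it the frame is accepted
    """
    def cost(pose):
        return dissimilarity(render_drr(pose), xray_pair)

    result = minimize(cost, np.asarray(pose0, dtype=float), method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-4, "maxiter": 2000})
    return result.x, result.fun, bool(result.fun < pass_value)
```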
S43: Automatic registration of the entire image sequence: starting from the registered initial frame, the optimization algorithm is executed over the image sequence in stages, for example automatically registering the next 5 to 10 frames. As the frame number advances, the pose of the virtual bone model is adjusted automatically so that it coincides with the bone contours in the two X-ray images and the gray levels match, and the pose of the local coordinate system for the current frame is recorded automatically. The matching similarity is computed for every frame: if it is below the qualifying similarity value, registration of that stage is complete; if any frame's similarity is not below the qualifying value, the pose of the virtual bone model is adjusted manually and the similarity recomputed until the value for every frame in the stage is below the qualifying value. Automatic registration proceeds stage by stage until the whole image sequence is registered, and the pose matrix of the virtual bone model corresponding to each frame is output;
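The staged propagation of S43 could then look like the following sketch, reusing `register_frame` from the previous example; frames whose score stays above the qualifying value are collected for manual correction and re-running:

```python
def register_sequence(initial_pose, frames, render_drr, dissimilarity,
                      pass_value=0.05, stage_size=10):
    """Propagate registration through the whole image sequence in stages (sketch for S43)."""
    poses, needs_manual = [], []
    current = initial_pose
    for start in range(0, len(frames), stage_size):
        for idx in range(start, min(start + stage_size, len(frames))):
            current, score, ok = register_frame(current, frames[idx],
                                                render_drr, dissimilarity, pass_value)
            poses.append(current)          # pose of the local coordinate system for this frame
            if not ok:
                needs_manual.append(idx)   # adjust the pose manually, then re-run this frame
    return poses, needs_manual
```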
The semi-automatic registration of S4 must be completed for every bone contained in the target joint.
The pose matrix of the virtual bone model is obtained from the relationship between the local coordinate system and the global coordinate system and does not by itself reflect the motion characteristics of the joint. For the computed joint angles and joint displacements to have kinematic and biomechanical meaning, a series of coordinate transformations is needed to relate the custom anatomical coordinate systems to the global coordinate system of the biplane environment.
The joint three-dimensional motion solving algorithm comprises the following steps:
s5: a coordinate transformation comprising:
S51: Anatomical coordinate system / CT coordinate system: the two bones forming the target joint are denoted A and B. The custom anatomical coordinate systems are defined in the CT coordinate system; first, the transformation matrices between the anatomical coordinate systems A_an of bone A and B_an of bone B and the CT scanning coordinate system (CT) are established. For bone A and bone B, the transformations from the anatomical coordinate system to the CT coordinate system are written:
A_an/CT
B_an/CT
S52: CT coordinate system / local coordinate system: the local coordinate systems of bone A and bone B are denoted L_a and L_b, and the transformations from the CT coordinate system to the corresponding local coordinate systems are written:
CT/L_a
CT/L_b
S53: Local coordinate system / global coordinate system: the semi-automatic 3D-2D registration of S4 outputs the transformation matrix from the local coordinate system to the global coordinate system (GC); for bone A and bone B these are written:
L_a/GC
L_b/GC
Combining the transformation matrices of S51, S52, and S53 relates the anatomical coordinate systems to the global coordinate system; the transformation relations for bone A and bone B are:
A_an/GC = A_an/CT × CT/L_a × L_a/GC
B_an/GC = B_an/CT × CT/L_b × L_b/GC
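With 4x4 homogeneous matrices, the chained transformations of S51-S53 reduce to a matrix product. The sketch below uses one possible convention (column vectors, right-to-left composition, and my own `T_*` naming); the actual multiplication order must follow the convention of the toolchain in use:

```python
import numpy as np

def anatomical_to_global(T_an_to_ct, T_ct_to_local, T_local_to_global):
    """Compose A_an/GC from A_an/CT, CT/L and L/GC for one bone and one frame (sketch for S5).

    Each argument is a 4x4 homogeneous transform; T_local_to_global comes from the
    semi-automatic 3D-2D registration (S4), the other two from CT processing (S1).
    """
    return T_local_to_global @ T_ct_to_local @ T_an_to_ct
```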
s6: calculating joint angles and joint displacements, comprising:
S61: Extracting Euler angles: of the bones A and B forming the target joint, one is the proximal bone and the other the distal bone. The joint angle is determined by the overall transformation from the proximal bone to the distal bone; the overall transformation matrix is denoted R_joint, the transformation matrix between the proximal anatomical coordinate system and the global coordinate system is denoted R_proximal, and the transformation matrix between the distal anatomical coordinate system and the global coordinate system is denoted R_distal:
R_joint = R_proximal^(-1) × R_distal
The rotation angles about the three coordinate axes are denoted θ_x, θ_y, and θ_z; from the relations among the elements of the transformation matrix R_joint, the Euler angles θ_x, θ_y, and θ_z are calculated.
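A sketch of the Euler-angle extraction in S61, following the relation R_joint = R_proximal^(-1) × R_distal given above; the "xyz" rotation sequence is an additional assumption (the appropriate sequence depends on the joint convention chosen, e.g. the ISB recommendation). SciPy's Rotation class performs the decomposition:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def joint_euler_angles(R_proximal, R_distal, order="xyz"):
    """Euler angles of the joint from the two anatomical-to-global rotations (sketch for S61).

    R_proximal, R_distal : 3x3 rotation matrices of the proximal / distal anatomical
                           coordinate systems expressed in the global coordinate system.
    """
    R_joint = R_proximal.T @ R_distal  # inverse of a rotation matrix is its transpose
    theta_x, theta_y, theta_z = Rotation.from_matrix(R_joint).as_euler(order, degrees=True)
    return theta_x, theta_y, theta_z
```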
S62: calculating joint displacement: the origin of the anatomical coordinate system of bone A is denoted as O 1 The origin of the anatomical coordinate system of bone B is denoted as O 2 The joint displacement is determined by converting the anatomical coordinate system origin into a global coordinate system, O 1 And O 2 The three-dimensional coordinates in the global coordinate system are denoted as O 1GC And O 2GC
O 1GC =GC/L a ×L a /CT×O 1
O 2GC =GC/L b ×L b /CT×O 2
The vector between the origin of the anatomical coordinate systems of bone a and bone B is projected onto each axis of the anatomical coordinate system of the proximal bone to quantify the translation, the result being expressed as a movement in the direction of the three coordinate axes of the anatomical coordinate system of the proximal bone.
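A matching sketch for S62: the anatomical origins, already expressed in the global coordinate system, give a difference vector that is projected onto the proximal bone's anatomical axes. Treating those axes as the columns of R_proximal is an assumption about how the rotation matrix is stored:

```python
import numpy as np

def joint_displacement(O1_gc, O2_gc, R_proximal):
    """Joint translation along the proximal anatomical axes (sketch for S62).

    O1_gc, O2_gc : anatomical-origin positions of the two bones, already expressed
                   in the global coordinate system (per S62).
    R_proximal   : 3x3 matrix whose columns are the proximal anatomical axes in GC.
    """
    d = np.asarray(O2_gc, dtype=float) - np.asarray(O1_gc, dtype=float)
    return R_proximal.T @ d  # components along the proximal x, y, z anatomical axes
```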
Compared with existing manual registration methods, the semi-automatic registration method based on biplane X-ray greatly improves processing efficiency, saves labor and time, and effectively improves registration accuracy, providing an important methodological and technical reference for biological three-dimensional motion image processing based on biplane dynamic X-ray imaging systems.
The joint three-dimensional motion solving algorithm provided by the invention converts the bone three-dimensional motion data obtained from biplane registration into six-degree-of-freedom relative motion data of the joint, enabling researchers to carry out biomechanically meaningful analyses of three-dimensional joint motion.

Claims (1)

1. A semi-automatic registration method based on biplane X-ray and a joint three-dimensional motion solving algorithm, characterized in that:
the semi-automatic registration method based on biplane X-ray comprises the following steps:
s1: CT image processing comprising:
S11: Creating a target region: based on the CT images of the target joint, an outer bone contour is created for a given bone on an anatomical plane (sagittal, frontal, or transverse); the cross-sectional contour is drawn frame by frame, or drawn at intervals of a few frames with the skipped frames filled in automatically by interpolation, until an outer contour has been created on every frame that contains the target bone in that anatomical plane; the region enclosed by the outer contour of the bone cross-section is the target region;
S12: Changing pixels: in all frames for which the target region has been created, the pixel values outside the target region are set to transparent, while the pixel values inside the target region are kept unchanged;
S13: Synthesizing a three-dimensional volume: all pixel-modified frames are assembled, according to the voxel values of the CT images, into a three-dimensional volume;
S14: Customizing an anatomical coordinate system: a three-dimensional solid bone model is reconstructed from the target region of S11, and an anatomical coordinate system is established for each solid bone model belonging to the target joint; for typical human joints such as the shoulder, hip, knee, and ankle, the standard conventions recommended by the International Society of Biomechanics (ISB) are followed;
the image processing operations of S1 are performed for every bone contained in the target joint;
s2: processing of biplane X-ray images comprising:
S21: Image splitting: the acquired dynamic X-ray data of the target joint is usually an image set or a video, which is split into an image sequence;
S22: Distortion correction: before testing, X-ray images of a hole-array grid are acquired for spatial calibration, and distortion correction is applied to the original biplane X-ray image sequence based on the hole-array grid data; the distortion correction algorithm compares the spacing between the holes in the grid image with the known physical spacing, computes a transformation matrix that corrects the image, and applies this matrix to all acquired image sequences;
s3: reconstructing a laboratory three-dimensional environment comprising:
S31: Establishing the global coordinate system: the pre-experiment calibration uses a LEGO (Legao) calibration cube embedded with 64 steel balls and 4 specially shaped metal markers; the center of the calibration cube serves as the origin of the global laboratory coordinate system, with the X axis parallel to the up-down direction of the human body, the Y axis parallel to the front-back direction, and the Z axis parallel to the inside-outside direction;
S32: Establishing local coordinate systems: the three-dimensional volume synthesized in S13 from the pixel-modified frames is placed in the reconstructed laboratory three-dimensional environment; the center of the cuboid is defined as the origin of a local coordinate system, and the x, y, and z axes of the local coordinate system are parallel to the edges along the length, width, and height of the cuboid; during registration, the pose of the local coordinate system changes continuously as the frame number of the X-ray image sequence advances;
S33: Determining the light source and biplane image positions: the centers of the 64 steel balls and 4 specially shaped metal markers embedded in the calibration cube are digitized in the X-ray images of the cube and compared with their known positions; a direct linear transformation (DLT) defines the linear mapping between three-dimensional object space and the two-dimensional image plane, from which the pose matrices of the cameras and imaging planes are determined, i.e., the positions of the light sources and biplane images in the global environment;
in the reconstructed laboratory three-dimensional environment, the light sources emit virtual X-rays and the biplane images receive them; under the virtual X-rays, only the bone within the three-dimensional volume is visualized while the volume outside the bone is rendered transparent, thereby generating a virtual bone model;
s4: semi-automatic 3D-2D registration, comprising:
S41: Adding edge detection and contrast filters: to improve registration accuracy, an edge detection filter and a contrast enhancement filter are applied to the bone model and to the biplane image sequence, enhancing the bone edge strength and image sharpness in the X-ray images and thus improving contrast;
S42: Manually registering the initial frame and calculating similarity: in the reconstructed laboratory three-dimensional environment, a biplane X-ray frame with relatively clear bone contours and grayscale is selected as the initial frame; the virtual bone model is translated and rotated until, in the two two-dimensional views, its silhouette coincides completely with the bone contours in the two X-ray images; because different bone regions in the X-ray images have different gray levels, and the virtual bone model likewise shows different gray levels from different viewing angles, the matching similarity between the virtual bone model and the X-ray images of the current frame is computed from the bone contours and gray levels using a downhill simplex optimization algorithm or a particle swarm optimization algorithm; a qualifying similarity value is set: when the computed similarity value is smaller than the qualifying value, registration of the initial frame is complete; otherwise the spatial pose of the virtual bone model is readjusted and the matching similarity recomputed until the computed value falls below the qualifying value, at which point the pose of the local coordinate system for the initial frame is recorded;
S43: Automatic registration of the entire image sequence: starting from the registered initial frame, the optimization algorithm is executed over the image sequence in stages, for example automatically registering the next 5 to 10 frames; as the frame number advances, the pose of the virtual bone model is adjusted automatically so that it coincides with the bone contours in the two X-ray images and the gray levels match, and the pose of the local coordinate system for the current frame is recorded automatically; the matching similarity is computed for every frame: if it is below the qualifying similarity value, registration of that stage is complete; if any frame's similarity is not below the qualifying value, the pose of the virtual bone model is adjusted manually and the similarity recomputed until the value for every frame in the stage is below the qualifying value; automatic registration proceeds stage by stage until the whole image sequence is registered, and the pose matrix of the virtual bone model corresponding to each frame is output;
the semi-automatic registration of S4 must be completed for every bone contained in the target joint;
the pose matrix of the virtual bone model is obtained from the relationship between the local coordinate system and the global coordinate system and does not by itself reflect the motion characteristics of the joint; for the computed joint angles and joint displacements to have kinematic and biomechanical meaning, a series of coordinate transformations is needed to relate the custom anatomical coordinate systems to the global coordinate system of the biplane environment;
the joint three-dimensional motion solving algorithm comprises the following steps:
s5: a coordinate transformation comprising:
S51: Anatomical coordinate system / CT coordinate system: the two bones forming the target joint are denoted A and B; the custom anatomical coordinate systems are defined in the CT coordinate system; first, the transformation matrices between the anatomical coordinate systems A_an of bone A and B_an of bone B and the CT scanning coordinate system (CT) are established; for bone A and bone B, the transformations from the anatomical coordinate system to the CT coordinate system are written:
A_an/CT
B_an/CT
S52: CT coordinate system / local coordinate system: the local coordinate systems of bone A and bone B are denoted L_a and L_b, and the transformations from the CT coordinate system to the corresponding local coordinate systems are written:
CT/L_a
CT/L_b
S53: Local coordinate system / global coordinate system: the semi-automatic 3D-2D registration of S4 outputs the transformation matrix from the local coordinate system to the global coordinate system (GC); for bone A and bone B these are written:
L_a/GC
L_b/GC
combining the transformation matrices of S51, S52, and S53 relates the anatomical coordinate systems to the global coordinate system; the transformation relations for bone A and bone B are:
A_an/GC = A_an/CT × CT/L_a × L_a/GC
B_an/GC = B_an/CT × CT/L_b × L_b/GC
s6: calculating joint angles and joint displacements, comprising:
S61: Extracting Euler angles: of the bones A and B forming the target joint, one is the proximal bone and the other the distal bone; the joint angle is determined by the overall transformation from the proximal bone to the distal bone, and the overall transformation matrix is denoted R_joint; the transformation matrix between the proximal anatomical coordinate system and the global coordinate system is denoted R_proximal, and the transformation matrix between the distal anatomical coordinate system and the global coordinate system is denoted R_distal:
R_joint = R_proximal^(-1) × R_distal
the rotation angles about the three coordinate axes are denoted θ_x, θ_y, and θ_z; from the relations among the elements of the transformation matrix R_joint, the Euler angles θ_x, θ_y, and θ_z are calculated;
S62: Calculating joint displacement: the origin of the anatomical coordinate system of the proximal bone is denoted O_1 and that of the distal bone is denoted O_2; the joint displacement is determined by transforming the anatomical coordinate system origins into the global coordinate system; the three-dimensional coordinates of O_1 and O_2 in the global coordinate system are denoted O_1GC and O_2GC:
O_1GC = GC/L_a × L_a/CT × O_1
O_2GC = GC/L_b × L_b/CT × O_2
the vector between the anatomical coordinate system origins of bone A and bone B is projected onto each axis of the proximal bone's anatomical coordinate system to quantify the translation; the result is expressed as displacements along the three coordinate axes of the proximal bone's anatomical coordinate system.
CN202210575650.3A 2022-05-25 2022-05-25 Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm Pending CN115018977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210575650.3A CN115018977A (en) 2022-05-25 2022-05-25 Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210575650.3A CN115018977A (en) 2022-05-25 2022-05-25 Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm

Publications (1)

Publication Number Publication Date
CN115018977A true CN115018977A (en) 2022-09-06

Family

ID=83069902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210575650.3A Pending CN115018977A (en) 2022-05-25 2022-05-25 Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm

Country Status (1)

Country Link
CN (1) CN115018977A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132747A (en) * 2023-10-25 2023-11-28 北京爱康宜诚医疗器材有限公司 Bone resetting method and device based on bone model
CN117132747B (en) * 2023-10-25 2024-03-19 北京爱康宜诚医疗器材有限公司 Bone resetting method and device based on bone model
CN117671162A (en) * 2024-02-01 2024-03-08 江苏一影医疗设备有限公司 Three-dimensional imaging method and device for 4D joint standing position
CN117671162B (en) * 2024-02-01 2024-04-19 江苏一影医疗设备有限公司 Three-dimensional imaging method and device for 4D joint standing position
CN117764541A (en) * 2024-02-22 2024-03-26 湖南必和必拓科技发展有限公司 interactive factory management system based on three-dimensional visualization technology

Similar Documents

Publication Publication Date Title
CN115018977A (en) Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm
US7804998B2 (en) Markerless motion capture system
CN110033465B (en) Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image
Gatesy et al. Scientific rotoscoping: a morphology‐based method of 3‐D motion analysis and visualization
US7117026B2 (en) Physiological model based non-rigid image registration
EP2534641B1 (en) Method for obtaining a three-dimensional reconstruction from one or more projective views and use thereof
Zhang et al. 3-D reconstruction of the spine from biplanar radiographs based on contour matching using the hough transform
JP6806655B2 (en) Radiation imaging device, image data processing device and image processing program
CN109419526A (en) Method and system for locomotion evaluation and correction in the synthesis of digital breast tomography
Postolka et al. Evaluation of an intensity-based algorithm for 2D/3D registration of natural knee videofluoroscopy data
AU2018301580B2 (en) Three-dimensional ultrasound image display method
CN115153835A (en) Acetabular prosthesis placement guide system and method based on feature point registration and augmented reality
CN107204045A (en) Virtual endoscope system based on CT images
US9355454B2 (en) Automatic estimation of anatomical extents
JP5177606B1 (en) Three-dimensional ultrasonic image creation method and program
EP4141799A1 (en) Method for obtaining a ct-like representation and virtual x-ray images in arbitrary views from a two-dimensional x-ray image
Haque et al. Hierarchical model-based tracking of cervical vertebrae from dynamic biplane radiographs
CN111466952B (en) Real-time conversion method and system for ultrasonic endoscope and CT three-dimensional image
US20220175457A1 (en) Endoscopic image registration system for robotic surgery
KR20080109379A (en) Method of registration 3d/2d model for measurement of joint kinematics
US11406471B1 (en) Hand-held stereovision system for image updating in surgery
Xiao et al. Mouse whole-body organ mapping by non-rigid registration approach
CN111184535A (en) Handheld unconstrained scanning wireless three-dimensional ultrasound real-time voxel imaging system
Shim et al. 2D-3D registration for 3D analysis of lower limb alignment in a weight-bearing condition
Fujinuma et al. Evaluation of tibiofibular joint alignment in ankle osteoarthritis based on 3D bone thickness

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination