CN117671162B - Three-dimensional imaging method and device for 4D joint standing position


Info

Publication number
CN117671162B
CN117671162B (application CN202410139598.6A)
Authority
CN
China
Prior art keywords
joint
data
standing position
dimensional
preset
Prior art date
Legal status
Active
Application number
CN202410139598.6A
Other languages
Chinese (zh)
Other versions
CN117671162A (en)
Inventor
王嘉舜
奚岩
陈阳
李巍
周一新
唐浩
袁鹏
Current Assignee
Shanghai Yiying Information Technology Co ltd
Jiangsu Yiying Medical Equipment Co ltd
Original Assignee
Shanghai Yiying Information Technology Co ltd
Jiangsu Yiying Medical Equipment Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yiying Information Technology Co ltd, Jiangsu Yiying Medical Equipment Co ltd filed Critical Shanghai Yiying Information Technology Co ltd
Priority to CN202410139598.6A
Publication of CN117671162A
Application granted
Publication of CN117671162B
Legal status: Active
Anticipated expiration


Abstract

The invention belongs to the technical field of computer image processing and provides a 4D joint standing position three-dimensional imaging method and device. The method comprises the following steps: acquiring a motion two-dimensional image sequence of a user's preset joint and standing position three-dimensional reconstruction data; training, on a preprocessed internal CT dataset, a first network model for bone segmentation of the standing position three-dimensional reconstruction data of the preset joint; performing bone segmentation on the standing position three-dimensional reconstruction data with the first network model to obtain standing position bone segmentation data of the preset joint; segmenting the motion two-dimensional image sequence of the preset joint with a second network model to obtain motion bone segmentation data of the preset joint; calculating, by a 2D-3D registration method, the transformation parameters corresponding to each frame of the motion bone segmentation data of the preset joint; and calculating standing position three-dimensional imaging data based on the standing position bone segmentation data of the preset joint and the transformation parameters to generate 4D standing position three-dimensional imaging.

Description

Three-dimensional imaging method and device for 4D joint standing position
Technical Field
The invention relates to the technical field of computer image processing, in particular to a three-dimensional imaging method and device for a 4D joint standing position.
Background
With the development of the national economy and the improvement of living standards, more and more people are paying attention to bone health. Conventional orthopedic diagnosis mostly relies on DR, CT or spiral CT examinations. However, the two-dimensional images captured by DR equipment can hardly reflect the three-dimensional state of the bones intuitively and clearly; CT and spiral CT cannot be used frequently because of their large radiation dose; and ordinary CT and spiral CT are limited by their equipment structure, making three-dimensional imaging in the standing (weight-bearing) position difficult to realize.
At present, clinical joint analysis mainly relies on static three-dimensional bone modeling such as EOS, or on observing dynamic CBCT-DR two-dimensional image sequences from systems such as the Konica X-ray machine and the Siemens Multitom Rax, which makes it difficult to observe the dynamic motion of the bones at a joint comprehensively and intuitively.
Disclosure of Invention
The invention aims to solve the above problems and is realized by the following technical solutions:
In some embodiments, the present invention provides a 4D joint standing position three-dimensional imaging method, comprising:
acquiring a motion two-dimensional image sequence of a joint preset by a user and three-dimensional reconstruction data of a standing position;
training a first network model for bone segmentation of standing three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT data set;
performing skeleton segmentation on the standing position three-dimensional reconstruction data through the first network model to obtain standing position skeleton segmentation data of the preset joint;
Dividing the motion two-dimensional image sequence of the preset joint by using a second network model to obtain motion skeleton division data of the preset joint;
Calculating transformation parameters corresponding to each frame of the moving skeleton segmentation data of the preset joint by a 2D-3D registration method;
And calculating standing position three-dimensional imaging data based on the standing position skeleton segmentation data of the preset joint and the transformation parameters to generate 4D standing position three-dimensional imaging.
In some embodiments, the bone segmentation of the standing three-dimensional reconstruction data by the first network model to obtain standing bone segmentation data of the preset joint includes:
Performing voxel level segmentation on each bone in the standing position three-dimensional reconstruction data through the first network model to obtain standing position bone segmentation data of the preset joint; the preset joints comprise knee joints and hip joints;
Wherein the standing position skeleton segmentation data of the knee joint comprises femur, patella, tibia and fibula; the standing skeletal segmentation data of the hip joint includes a hip bone and a femur.
In some embodiments, the segmenting the moving two-dimensional image sequence of the preset joint using the second network model to obtain moving skeleton segmentation data of the preset joint includes:
Dividing the motion two-dimensional image sequence of the preset joint through the second network model to obtain motion skeleton division data of the preset joint;
the second network model is a deep object detection network model, and the moving bone segmentation data comprises a bone region, a soft tissue region and an open beam region.
In some embodiments, after the segmenting the motion two-dimensional image sequence of the preset joint using the second network model to obtain the motion skeleton segmentation data of the preset joint, the method further includes:
and carrying out edge extraction on the motion skeleton segmentation data of the preset joint.
In some embodiments, the calculating, by using a 2D-3D registration method, transformation parameters corresponding to each frame of the moving bone segmentation data of the preset joint includes:
When the preset joint is a knee joint, calculating a rotation angle of bones in the knee joint by using motion bone segmentation data of the knee joint;
Rotating the standing bone segmentation data based on the rotation angle of bones in the knee joint, and performing simulated projection based on scanning parameters of a standing three-dimensional CT device;
and registering the femur, the tibia, the fibula and the patella in the knee joint in sequence based on a template matching method to obtain translation parameters of each bone in the knee joint.
In some embodiments, the calculating, by using a 2D-3D registration method, transformation parameters corresponding to each frame of the moving bone segmentation data of the preset joint includes:
when the preset joint is a hip joint, detecting the femur edge in the edge image of the motion DR sequence through Hough transformation and obtaining the rotation angle of the femur in the hip joint;
Rotating the standing position skeleton segmentation data based on the rotation angle of the femur in the hip joint, and performing simulated projection based on the scanning parameters of the standing position three-dimensional CT equipment to obtain a simulated projection image;
And calculating the rotation center of the femur in the simulated projection image, and obtaining the translation parameter of the femur according to the displacement difference between the motion two-dimensional image and the rotation center of the femur in the simulated projection image.
In some embodiments, the calculating standing three-dimensional imaging data based on the standing bone segmentation data of the preset joint and the transformation parameters includes:
When the preset joint is a knee joint, performing rigid transformation on the standing position skeleton segmentation data of the knee joint based on the transformation parameters; the rigid transformation of each bone in the knee joint involves three degrees of freedom: translation of the three-dimensional data along the x, z axes and rotation about the y axis;
Wherein the rotation angle about the y-axis is the rotation angle of the bone in the knee joint; the translation parameters along the x, z axes are the translation parameters of each bone in the knee joint.
In some embodiments, the calculating standing three-dimensional imaging data based on the standing bone segmentation data of the preset joint and the transformation parameters includes:
when the preset joint is a hip joint, performing rigid transformation on the standing position skeleton segmentation data of the hip joint based on the transformation parameters;
The rigid transformation of the femur in the hip joint involves three degrees of freedom: translation of the three-dimensional data along the x, z axes and rotation about the y axis; the rotation angle around the y-axis is the rotation angle of the bone in the hip joint; the translation parameters along the x and z axes are the translation parameters of each bone in the hip joint;
the rigid transformation of the hip bone in the hip joint involves two degrees of freedom: translation of the three-dimensional data along the x, z axes; and the translation parameters along the x and z axes are the sum of displacement difference of the femur rotation center in the two-dimensional moving image and the simulated projection image and the translation of bones in the horizontal direction according to the depth learning segmentation template of the moving two-dimensional image sequence.
In some embodiments, before training the first network model for bone segmentation of standing three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT dataset, further comprising:
Isotropic resampling, ROI cropping of the preset joint, padding and voxel-level bone data labeling are performed on the internal CT dataset;
Projection reconstruction is performed under the scanning parameters of a standing position three-dimensional CT device using a simulated CBCT program, so as to migrate the spiral CT dataset to a CBCT-style dataset.
In some embodiments, the present invention further provides a 4D joint standing position three-dimensional imaging device, applying the above 4D joint standing position three-dimensional imaging method and comprising:
the data acquisition module is used for acquiring a motion two-dimensional image sequence of a joint preset by a user and three-dimensional reconstruction data of a standing position;
the model training module is used for training a first network model for performing skeleton segmentation on the standing position three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT data set;
the first segmentation module is used for carrying out skeleton segmentation on the standing position three-dimensional reconstruction data through the first network model to obtain standing position skeleton segmentation data of the preset joint;
the second segmentation module is used for segmenting the motion two-dimensional image sequence of the preset joint by using a second network model to obtain motion skeleton segmentation data of the preset joint;
The registration calculation module is used for calculating transformation parameters corresponding to each frame of the motion skeleton segmentation data of the preset joint through a 2D-3D registration method;
And the four-dimensional imaging module is used for calculating standing position three-dimensional imaging data based on the standing position skeleton segmentation data of the preset joint and the transformation parameters so as to generate 4D standing position three-dimensional imaging.
The 4D joint standing position three-dimensional imaging method and device provided by the invention have at least the following beneficial effects:
The 4D joint standing position three-dimensional imaging method enables doctors to intuitively analyze the three-dimensional movement condition of the joint model under the physiological loading state, provides more valuable information for medical diagnosis and treatment schemes, and has higher clinical value for orthopedic diagnosis, sports rehabilitation and postoperative diagnosis.
Drawings
The above features, technical features, advantages and implementations of the 4D joint standing position three-dimensional imaging method and device will be further described below, clearly and understandably, with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of one embodiment of a method for three-dimensional imaging of a 4D joint standing position in accordance with the present invention;
FIG. 2 is a schematic overall flow chart of the present invention for processing knee joint data;
FIG. 3 is a schematic diagram of a 2D-3D registration process for processing knee joint data in accordance with the present invention;
FIG. 4 is a graph showing the results of 4D imaging of knee joint data processed in accordance with the present invention;
FIG. 5 is a schematic diagram of the overall flow of processing hip joint data in accordance with the present invention;
FIG. 6 is a schematic diagram of a 2D-3D registration process for processing hip joint data in accordance with the present invention;
Fig. 7 is a schematic representation of the results of 4D imaging of hip data processed in accordance with the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For simplicity, only the parts relevant to the present invention are shown schematically in the figures; they do not represent the actual structure of the product. In addition, to make the drawings easy to understand, components having the same structure or function in some figures are shown only once, or only one of them is labeled. Herein, "a" means not only "only one" but also covers the case of "more than one".
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In addition, in the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will explain the specific embodiments of the present invention with reference to the accompanying drawings. It is evident that the drawings in the following description are only examples of the invention, from which other drawings and other embodiments can be obtained by a person skilled in the art without inventive effort.
In one embodiment, as shown in fig. 1, the present invention provides a three-dimensional imaging method for a standing position of a 4D joint, comprising:
s101, acquiring a motion two-dimensional image sequence of a joint preset by a user and three-dimensional reconstruction data of a standing position.
Wherein the motion two-dimensional image sequence comprises a plurality of two-dimensional image sequences acquired by a two-dimensional imaging system (such as DR) at a preset joint during user motion. The standing position three-dimensional reconstruction data includes standing position three-dimensional reconstruction data at a preset joint acquired by a three-dimensional imaging system (such as CBCT).
Digital radiography (DR) is a digital imaging technique developed for purposes similar to those of computed radiography (CR), but with a different basic principle and structure. DR developed from digital fluorography (DF): an image intensifier tube serves as the information carrier receiving the X-ray information transmitted through the human body, and the signal is collected by a video camera and then digitized. Apart from the difference in information carrier, DR differs from CR in that it requires no special equipment to work with other devices, whereas CR can acquire information on any X-ray imaging device using an imaging plate. Like CR, DR supports various kinds of image post-processing as well as image transmission and storage. A DR system consists of an X-ray generator, an X-ray detector, an image processor and an image display. Its imaging principle is as follows: X-rays passing through the human body are partially absorbed and thus attenuated; the transmitted X-rays are received by the detector, converted into an electrical signal and then into a digital image signal.
CBCT (cone-beam computed tomography) is the first choice for standing position three-dimensional imaging because of its fast imaging speed and low radiation dose. A standing position CBCT device can support not only static weight-bearing three-dimensional imaging but also, thanks to its high imaging speed, dynamic DR two-dimensional imaging, and therefore has very high clinical value in orthopedic diagnosis, sports rehabilitation, postoperative diagnosis and other fields.
Specifically, the input data (the motion two-dimensional image sequence of the user's preset joint and the standing position three-dimensional reconstruction data) must satisfy the following requirements: the imported motion DR sequence (i.e., the motion two-dimensional image sequence of the preset joint) and the standing position CBCT reconstruction data (i.e., the standing position three-dimensional reconstruction data) must come from the same patient and the same standing position CBCT device under the same scanning parameters: a CBCT scan of one specific joint, plus a group of no fewer than 20 lateral CBCT-DR frames captured while the patient moves that joint; the specific joint is a knee joint or a hip joint.
S102, training a first network model for bone segmentation of standing three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT data set.
Specifically, the internal CT dataset is lower-limb full-length spiral CT reconstruction data; "internal" means the dataset is not a public dataset but a research dataset provided by a hospital.
In a spiral (helical) CT scan, the X-ray tube rotates continuously around the human body while the scanning couch advances horizontally at a constant speed, so that the scanning trajectory forms a spiral over the patient's surface. Unlike conventional CT scanning, spiral scanning obtains continuous slice information, i.e., information on all tissues within the scan range, and is therefore also called volume data acquisition; it avoids small lesions being missed because of respiratory motion between slices. According to the detector configuration, spiral CT is divided into single-slice and multi-slice spiral CT; the digital flat-panel detector markedly raises the scanning speed, enabling fast, thin-slice, high-resolution scanning with large coverage.
S103, performing skeleton segmentation on the standing position three-dimensional reconstruction data through the first network model to obtain standing position skeleton segmentation data of the preset joint.
And S104, segmenting the motion two-dimensional image sequence of the preset joint by using a second network model to obtain motion skeleton segmentation data of the preset joint.
Before deep learning, a hard-to-avoid problem in machine vision algorithms was how to extract usable features from an image. Hand-crafted features often limit an algorithm in two ways: feature extraction depends on expert experience, so the extracted features may be incomplete or weak; and the ways of fusing hand-crafted features are limited, so the extracted information cannot be used effectively. A deep learning algorithm autonomously learns, over a large number of training iterations, which features to extract and how to extract them, and fuses many features to obtain the final prediction; this is one reason deep learning has excelled in the machine vision field. The bone segmentation method used in the present invention builds on these deep learning techniques.
S105, calculating transformation parameters corresponding to each frame of the motion skeleton segmentation data of the preset joint through a 2D-3D registration method.
The 2D-3D registration method is specifically defined as follows. For knee joint data: first, according to the DR sequence segmentation result, the slope of a bone edge approximates the rotation angle of the bone; the standing position CBCT bone data are then projected in simulation according to the scanning parameters of the real standing position CBCT device, which converts the 2D-3D registration task into a 2D-2D registration between the motion DR sequence and the simulated projection. After the rotation is completed, the femur, the tibia-fibula and the patella are registered in sequence using a template matching (match template) method with NCC as the optimization index, and the registration ROI in the DR image is narrowed using the registration result of the previous bone, so as to improve registration accuracy and obtain the translation parameters of each bone;
For hip joint data, the motion DR sequence is processed by deep learning combined with conventional methods. Specifically: after the DR sequence segmentation result is obtained, Canny edge detection is applied to the original DR sequence; because the Canny result contains many irrelevant edges and noise, the DR sequence segmentation result from step 4 is used as prior information to filter out edges in the edge image that do not belong to the bone region. The femoral edges in the edge images of the motion DR sequence are then detected by the Hough transform in three steps using a coarse-to-fine strategy: long straight lines are detected first to locate the femoral rotation center in the DR image, and short straight lines near that center are then detected frame by frame to obtain the femoral rotation angle; for frames where neither detection finds a suitable femoral angle, the angle is interpolated from adjacent frames. After the rotation is completed, prior knowledge in the CBCT data is used to compute the femoral rotation center in the simulated projection, and the femoral translation parameters are obtained from the displacement difference between the femoral rotation centers in the real DR image and the simulated projection image. Finally, the small horizontal translation of the bones is computed from the deep learning segmentation template of the motion DR sequence to obtain more accurate transformation parameters.
S106, based on the standing bone segmentation data of the preset joint and the transformation parameters, calculating standing three-dimensional imaging data to generate 4D standing three-dimensional imaging.
The invention provides a 4D standing position CBCT imaging method based on motion DR sequence registration, which enables doctors to intuitively analyze three-dimensional motion conditions of a joint model under a physiological loading state.
Before training the first network model for bone segmentation of the standing three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT dataset, the method further comprises:
Isotropic resampling, ROI cropping of the preset joint, padding and voxel-level bone data labeling are performed on the internal CT dataset; projection reconstruction is performed under the scanning parameters of a standing position three-dimensional CT device using a simulated CBCT program, so as to migrate the spiral CT dataset to a CBCT-style dataset.
Illustratively, in the preprocessing procedure of the internal CT dataset, isotropic resampling, ROI cropping and padding at the specific joint, and bone data labeling are performed, and projection reconstruction is performed under the scanning parameters of a real standing position CBCT device using a simulated CBCT program to migrate the internal CT dataset to CBCT-style data.
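By way of a non-limiting illustration, this preprocessing may be sketched as follows, assuming the SimpleITK library; the target spacing, ROI origin/size and the air padding value are illustrative assumptions rather than values prescribed by the invention.

```python
import SimpleITK as sitk

def preprocess_ct(path, iso_spacing=(1.0, 1.0, 1.0),
                  roi_index=(0, 0, 0), roi_size=(256, 256, 256)):
    """Isotropic resampling followed by ROI cropping at the joint.

    iso_spacing, roi_index and roi_size are illustrative assumptions; the ROI
    is assumed to lie inside the resampled volume (sitk.ConstantPad could
    realize the padding step otherwise).
    """
    img = sitk.ReadImage(path)
    in_size, in_spacing = img.GetSize(), img.GetSpacing()
    # New grid size so that size * spacing (the physical extent) is preserved.
    out_size = [int(round(sz * sp / iso))
                for sz, sp, iso in zip(in_size, in_spacing, iso_spacing)]
    resampled = sitk.Resample(img, out_size, sitk.Transform(), sitk.sitkLinear,
                              img.GetOrigin(), iso_spacing, img.GetDirection(),
                              -1000.0, img.GetPixelID())  # default value: air HU
    return sitk.RegionOfInterest(resampled, roi_size, roi_index)
```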
In one embodiment, the segmenting the moving two-dimensional image sequence of the preset joint by using the second network model to obtain moving skeleton segmentation data of the preset joint includes:
Dividing the motion two-dimensional image sequence of the preset joint through the second network model to obtain motion skeleton division data of the preset joint;
the second network model is a deep object detection network model, and the moving bone segmentation data comprises a bone region, a soft tissue region and an open beam region.
Illustratively, the second network model includes a deep object detection network model such as XNet. XNet is a convolutional neural network for segmenting an X-ray image into bone, soft tissue and open beam regions; here it is used to distinguish the bone regions, soft tissue regions and exposure (open beam) regions of the motion DR sequence.
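Illustratively, the inference of the second network model over the DR frames may be sketched as follows with PyTorch; the `model` object, the class ordering and the per-frame normalization are assumptions, since the invention does not fix XNet's preprocessing.

```python
import numpy as np
import torch

def segment_dr_sequence(model, frames):
    """Per-frame 3-class segmentation: bone / soft tissue / open beam.

    `model` is assumed to be a trained XNet-style network returning per-pixel
    class logits of shape (1, 3, H, W); the class ordering is an assumption.
    """
    model.eval()
    masks = []
    with torch.no_grad():
        for frame in frames:                                 # frame: 2D float array
            x = torch.from_numpy(frame).float()[None, None]  # -> (1, 1, H, W)
            x = (x - x.mean()) / (x.std() + 1e-6)            # assumed normalization
            masks.append(model(x).argmax(dim=1)[0].cpu().numpy().astype(np.uint8))
    return masks
```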
In one embodiment, after the segmenting the moving two-dimensional image sequence of the preset joint by using the second network model, obtaining moving skeleton segmentation data of the preset joint, the method further includes:
and carrying out edge extraction on the motion skeleton segmentation data of the preset joint.
Specifically, edge extraction is applied to the imported motion DR sequence by a conventional method; the bone edge extraction method used is the Canny edge detection algorithm, whose implementation steps are as follows: first, noise in the image is removed with a 5x5 Gaussian filter kernel; edges are then detected with the Sobel operator, whose convolution templates are the standard kernels

d_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]], d_y = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]].

The convolution templates d_x and d_y are applied in the x and y directions of the image respectively, after which the gradient magnitude and gradient direction are calculated. Since the Sobel operator also responds to a large number of irrelevant edges and noise, irrelevant boundaries are finally eliminated by non-maximum suppression and a double-threshold method.
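A minimal OpenCV sketch of this edge-extraction step is given below; the double-threshold values and the use of the bone mask from the second network model as a prior filter are illustrative assumptions, consistent with the hip joint processing described later.

```python
import cv2
import numpy as np

def extract_bone_edges(frame, bone_mask, low_thr=50, high_thr=150):
    """Canny edges of one 8-bit DR frame, restricted to the bone prior.

    cv2.Canny internally performs Sobel gradients, non-maximum suppression
    and double thresholding; low_thr/high_thr are illustrative values.
    """
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)   # 5x5 Gaussian, as in the text
    edges = cv2.Canny(blurred, low_thr, high_thr)
    keep = (bone_mask > 0).astype(np.uint8) * 255  # prior from the segmentation
    return cv2.bitwise_and(edges, keep)
```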
In one embodiment, the bone segmentation of the standing three-dimensional reconstruction data by the first network model to obtain standing bone segmentation data of the preset joint includes:
Performing voxel level segmentation on each bone in the standing position three-dimensional reconstruction data through the first network model to obtain standing position bone segmentation data of the preset joint; the preset joints comprise knee joints and hip joints;
Wherein the standing position skeleton segmentation data of the knee joint comprises femur, patella, tibia and fibula; the standing skeletal segmentation data of the hip joint includes a hip bone and a femur.
Specifically, the trained network model segments each bone in the standing position CBCT reconstruction data of the specific joint at voxel level: knee joint data are divided into 1) femur, 2) patella, 3) tibia and fibula; hip joint data are divided into 1) hip bone, 2) femur.
Referring now to fig. 2, the method of the present invention may be called a 4D standing position CBCT imaging method based on motion DR sequence registration. Illustratively, the internal CT dataset is first read, which specifically includes: 3D CT data, 3D mask data, the 2D DR motion sequence, and the projection geometry parameters. The 3D CT data and the 2D DR motion sequence are rotated into the same orientation, and the 3D CT data are multiplied by the 3D mask data to obtain 3D bone data. The DR sequence is segmented by network/conventional methods. The registration procedure then follows; each bone is registered separately: a loss function (NCC or SSIM) is set; the search range of the 6 degrees of freedom is set (only 3 degrees of freedom are searched, the ranges of the other three being set to zero); rigid transformation and simulated projection (3D -> 2D) are performed; and the transformation parameters are modified by the Powell algorithm until the Powell termination condition is reached, at which point registration is complete. The 4D CBCT data are then generated from the transformation parameters.
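A compact sketch of this per-bone registration loop follows, assuming NCC as the loss; the simulated projection is approximated here by a parallel-beam sum along one axis, whereas the actual method projects under the real standing position CBCT scanning geometry. The rotation about y is taken as fixed (from the 2D analysis) and Powell searches the remaining two translations, one way of realizing the restricted 3-degree-of-freedom search described above.

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def drr(volume):
    # Stand-in simulated projection (3D -> 2D): parallel-beam sum along y.
    return volume.sum(axis=1)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def register_bone(bone_volume, dr_frame, angle_y):
    """Search tx, tz for one bone at a fixed rotation angle about y."""
    rotated = rotate(bone_volume, angle_y, axes=(0, 2), reshape=False, order=1)

    def loss(params):
        tx, tz = params
        moved = shift(rotated, (tz, 0.0, tx), order=1)  # (z, y, x) ordering
        return -ncc(drr(moved), dr_frame)               # Powell minimizes -NCC

    res = minimize(loss, x0=np.zeros(2), method="Powell")
    return res.x, -res.fun  # translation parameters, final NCC
```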
As shown in fig. 3, the registration procedure first imports the original DR data at the knee joint of the patient and segments it with XNet to obtain a segmentation mask, yielding bone data; the patella and the tibia are registered by template matching (MT) on the bone data. The original DR data, on the other hand, are preprocessed, and the femur is registered by template matching on the preprocessed original DR data, with the parameters recorded. Finally, the 3D data are rigidly transformed according to the registered parameters of the femur, the patella and the tibia to generate the 4D data (the 3D bone mask is required).
Here, crop denotes cropping, used to reduce the ROI, and MT denotes the template matching (match template) operation used for registration.
The specific implementation steps of the knee joint are as follows:
step 1, importing two-dimensional motion DR sequence (original DR data) and three-dimensional standing position CBCT reconstruction data at knee joint of a patient.
Specifically, the data imported in step 1 should come from the same standing position CBCT device under the same scanning parameters, with the orientation of the data kept consistent; meanwhile, the scanning parameters of the device, mainly including the pose and resolution of the CT machine, are saved for the subsequent registration procedure.
Step 2, preprocess the internal CT dataset and use nnUNet to train a network model (which may be called the first network model) capable of bone segmentation on standing position CBCT reconstruction data of the knee joint.
Specifically, step 2 performs isotropic resampling on the internal CT dataset, crops the ROI at the knee joint, performs voxel-level knee joint bone data labeling, and then uses a simulated CBCT program to perform projection reconstruction under the scanning parameters of the real standing position CBCT device, so as to migrate the spiral CT dataset to a CBCT-style dataset.
And 3, segmenting the imported standing position CBCT reconstruction data by using the network model trained in the step 2.
Specifically, step 3 uses a network model (e.g., nnUNet) to segment the knee stance CBCT data into four categories: 1) femur, 2) patella, 3) tibia and fibula, and 4) background.
Step 4, segmenting the imported motion DR sequence using a second network model (e.g., XNet, or other point-of-key object detection network).
Specifically, step 4 uses XNet to segment the imported motion DR sequence into three categories: 1) bone, 2) soft tissue, and 3) exposed areas.
And 5, performing edge extraction on the imported motion DR sequence by using a traditional method.
Specifically, the Canny edge detection algorithm is applied to the motion DR sequence. Since the knee joint region images clearly, the XNet segmentation result from step 4 already meets the requirements of the subsequent registration procedure; the edge extraction result can additionally serve as verification.
When the preset joint is a knee joint, calculating a rotation angle of bones in the knee joint by using motion bone segmentation data of the knee joint; rotating the standing bone segmentation data based on the rotation angle of bones in the knee joint, and performing simulated projection based on scanning parameters of a standing three-dimensional CT device; and registering the femur, the tibia, the fibula and the patella in the knee joint in sequence based on a template matching method to obtain translation parameters of each bone in the knee joint.
Specifically, step 6, a 2D-3D registration procedure of the knee joint is performed on the input data after preprocessing is completed, and transformation parameters corresponding to each frame of the motion DR sequence are calculated.
Specifically, the rotation angle of the bones is first calculated using the DR sequence segmentation result from step 4; the standing position CBCT bone data segmented in step 3 are then rotated, and simulated projection is performed according to the scanning parameters of the real standing position CBCT device, converting the 2D-3D registration task into a 2D-2D registration between the motion DR sequence and the simulated projection. After the rotation is completed, the femur, the tibia-fibula and the patella are registered in sequence by a template matching (match template) method with NCC as the optimization index; the registration ROI in the DR image is narrowed using the registration result of the previous bone to improve registration accuracy, and the translation parameters of each bone are obtained.
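Illustratively, the template matching step may be sketched with OpenCV as follows; cv2.TM_CCOEFF_NORMED serves as the NCC optimization index, and the ROI-narrowing convention is an assumption.

```python
import cv2

def match_bone(dr_image, template, roi=None):
    """Locate one bone's simulated-projection template in a DR frame by NCC.

    `roi` = (x, y, w, h) narrows the search window using the previous bone's
    result, as described in the text; both images are assumed uint8/float32
    with the template no larger than the search window.
    """
    x0, y0 = 0, 0
    search = dr_image
    if roi is not None:
        x0, y0, w, h = roi
        search = dr_image[y0:y0 + h, x0:x0 + w]
    scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, (mx, my) = cv2.minMaxLoc(scores)
    # Best match position in full-image coordinates gives the 2D translation.
    return (x0 + mx, y0 + my), best
```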
Performing rigid transformation on standing position skeleton segmentation data of the knee joint based on the transformation parameters; the rigid transformation of each bone in the knee joint involves three degrees of freedom: translation of the three-dimensional data along the x, z axes and rotation about the y axis; wherein the rotation angle about the y-axis is the rotation angle of the bone in the knee joint; the translation parameters along the x, z axes are the translation parameters of each bone in the knee joint.
Illustratively, standing CBCT bone data is rigidly transformed, as in step 7.
Specifically, in the rigid transformation of the standing position CBCT bone data of the knee joint, the transformations of the femur, the tibia-fibula and the patella each involve three degrees of freedom: translation of the three-dimensional data along the x and z axes and rotation about the y axis. The rotation about the y axis is calculated in step 4 from the segmentation result of the DR sequence, and the translations along the x and z axes are obtained in step 6 by the template matching method.
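A sketch of applying this three-degree-of-freedom rigid transform to one bone volume with SciPy is given below; the (z, y, x) array ordering, under which rotation about the y axis acts in the (z, x) plane, is an assumption.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def rigid_transform_bone(volume, angle_y_deg, tx, tz):
    """Rotate about y, then translate along x and z (3 degrees of freedom)."""
    rotated = rotate(volume, angle_y_deg, axes=(0, 2), reshape=False, order=1)
    return shift(rotated, (tz, 0.0, tx), order=1)

# Applied frame by frame, the transformed volumes stack into the 4D result:
# volume_4d = np.stack([rigid_transform_bone(bone, a, tx, tz)
#                       for a, tx, tz in per_frame_params])
```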
Finally, step 7 yields a series of three-dimensional standing position CBCT bone data corresponding frame by frame to the motion DR sequence, which can be regarded as the 4D imaging result. The 4D imaging results for knee joint data are shown schematically in fig. 4: the 8th, 16th, 24th and 32nd DR frames, their corresponding registration results, and the three-dimensional volume renderings are shown respectively.
Taking the hip joint as an example and referring to fig. 5, the method of the present invention may again be called a 4D standing position CBCT imaging method based on motion DR sequence registration. The registration procedure is: 1) correct the horizontal translation of the X-ray (motion DR) sequence; 2) calculate the femoral rotation center of the X-ray sequence; 3) calculate the femoral angle of each X-ray frame; 4) calculate the femoral rotation center of the femur CT data; 5) rotate the femur CT data; 6) calculate the horizontal displacement by template matching; 7) restore the translation removed in the first step; registration is then complete, and the 4D CBCT data are generated from the parameters.
The specific implementation steps of the hip joint are as follows:
Step 1, importing a motion DR sequence and standing position CBCT reconstruction data of a hip joint of a patient.
Specifically, the data imported in step 1 should be from the same scanning parameters of the standing CBCT apparatus, and the orientation of the data is kept consistent, while the scanning parameters of the apparatus are saved for the subsequent registration procedure.
Step 2, preprocess the internal CT dataset and use a first network model (such as nnUNet) to train a network model capable of bone segmentation on standing position CBCT reconstruction data of the hip joint.
Specifically, step 2 performs isotropic resampling on the internal data set, cuts out an ROI at the hip joint, performs voxel-level hip joint bone data labeling, and then performs projection reconstruction under the scanning parameters of the real standing position CBCT device by using a simulated CBCT program so as to migrate the spiral CT data set into a CBCT-style data set.
And 3, segmenting the imported standing position CBCT reconstruction data by using the network model trained in the step 2.
Specifically, step 3 uses a network model to segment the hip stance CBCT data into three categories: 1) hip bone, 2) femur, 3) background.
Step 4, segmenting the imported motion DR sequence using a second network model (e.g., XNet).
Specifically, step 4 uses XNet to segment the imported motion DR sequence into three categories: 1) bone, 2) soft tissue, and 3) exposed areas.
And 5, performing edge extraction on the imported motion DR sequence by using a traditional method.
Specifically, the Canny edge detection algorithm is applied to the motion DR sequence. Because the body is thick in the hip joint region, the imaging quality is poor, and the XNet segmentation result from step 4 cannot fully meet the requirements of the subsequent registration procedure; the edge extraction result is therefore needed to supplement the bone information.
When the preset joint is a hip joint, detecting the femur edge in the edge image of the motion DR sequence through Hough transformation and obtaining the rotation angle of the femur in the hip joint; rotating the standing position skeleton segmentation data based on the rotation angle of the femur in the hip joint, and performing simulated projection based on the scanning parameters of the standing position three-dimensional CT equipment to obtain a simulated projection image; and calculating the rotation center of the femur in the simulated projection image, and obtaining the translation parameter of the femur according to the displacement difference between the motion two-dimensional image and the rotation center of the femur in the simulated projection image.
And 6, executing a 2D-3D registration program of the hip joint on the preprocessed input data, and calculating transformation parameters corresponding to each frame of the motion DR sequence.
Specifically, the motion DR sequence is first processed by deep learning combined with conventional methods, and the femoral edges in the edge map of the motion DR sequence are then detected by the Hough transform in three steps using a coarse-to-fine strategy: long straight lines are detected first to locate the femoral rotation center in the DR image; short straight lines near that center are then detected frame by frame to obtain the femoral rotation angle; for frames where neither detection finds a suitable femoral angle, the angle is interpolated from adjacent frames. After the rotation is completed, prior knowledge in the CBCT data is used to compute the femoral rotation center in the simulated projection, and the femoral translation parameters are obtained from the displacement difference between the femoral rotation centers in the real DR image and the simulated projection image. Finally, the small horizontal translation of the bones is computed from the deep learning segmentation template of the motion DR sequence to obtain more accurate transformation parameters. The 2D-3D registration flow for hip joint data is shown in fig. 6.
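The fine step of this coarse-to-fine detection may be sketched as follows with OpenCV's probabilistic Hough transform; the femoral rotation center is assumed to come from a prior long-line detection, and all length/radius thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def femur_angle(edge_img, center, short_min=40, short_max=120, radius=100):
    """Per-frame femoral angle from short lines near the rotation center.

    `center` is the femoral rotation center located beforehand from long
    straight lines; returns None when no suitable line is found, in which
    case the angle is interpolated from adjacent frames, as in the text.
    """
    lines = cv2.HoughLinesP(edge_img, 1, np.pi / 180, threshold=50,
                            minLineLength=short_min, maxLineGap=5)
    if lines is None:
        return None
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)
        mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
        if short_min <= length <= short_max and \
                np.linalg.norm(mid - np.asarray(center)) < radius:
            angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    return float(np.median(angles)) if angles else None
```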
Performing rigid transformation on standing position skeleton segmentation data of the hip joint based on the transformation parameters; the rigid transformation of the femur in the hip joint involves three degrees of freedom: translation of the three-dimensional data along the x, z axes and rotation about the y axis; the rotation angle around the y-axis is the rotation angle of the bone in the hip joint; the translation parameters along the x and z axes are the translation parameters of each bone in the hip joint; the rigid transformation of the hip bone in the hip joint involves two degrees of freedom: translation of the three-dimensional data along the x, z axes; and the translation parameters along the x and z axes are the sum of displacement difference of the femur rotation center in the two-dimensional moving image and the simulated projection image and the translation of bones in the horizontal direction according to the depth learning segmentation template of the moving two-dimensional image sequence.
Illustratively, standing CBCT bone data is rigidly transformed, as in step 7.
Specifically, in the rigid transformation of the standing position CBCT bone data of the hip joint, the transformation of the femur involves three degrees of freedom: translation of the three-dimensional data along the x and z axes and rotation about the y axis. The rotation about the y axis is obtained by the three-step coarse-to-fine Hough transform detection of the femoral edges in the edge map of the motion DR sequence, and the translations along the x and z axes are obtained in step 6 from the displacement difference of the femoral rotation centers in the real DR image and the simulated projection image. The transformation of the hip bone involves two degrees of freedom: translation of the three-dimensional data along the x and z axes, given by the sum of the displacement difference of the femoral rotation centers calculated in step 6 and the small horizontal bone translation calculated from the deep learning segmentation template of the motion DR sequence.
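Illustratively, combining these quantities into the hip bone's two translation parameters reduces to the simple sum below; the mapping from 2D pixel displacements to physical x/z coordinates via the scan geometry is omitted as an assumption.

```python
def hip_bone_translation(center_dr, center_proj, dx_horizontal):
    """2-DOF hip bone translation (pixel units).

    center_dr / center_proj: femoral rotation centers in the real DR image
    and the simulated projection; dx_horizontal: small horizontal shift from
    the deep learning segmentation template. Conversion to physical x/z via
    the CBCT scan geometry is not shown.
    """
    tx = (center_dr[0] - center_proj[0]) + dx_horizontal
    tz = center_dr[1] - center_proj[1]
    return tx, tz
```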
Finally, step 7 yields a series of three-dimensional standing position CBCT bone data corresponding frame by frame to the motion DR sequence, which can be regarded as the 4D imaging result. A schematic of the 4D imaging results for hip joint data is shown in fig. 7.
In some embodiments, the present invention further provides a 4D joint standing position three-dimensional imaging device, applying the above 4D joint standing position three-dimensional imaging method and comprising:
the data acquisition module is used for acquiring a motion two-dimensional image sequence of a joint preset by a user and three-dimensional reconstruction data of a standing position;
the model training module is used for training a first network model for performing skeleton segmentation on the standing position three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT data set;
the first segmentation module is used for carrying out skeleton segmentation on the standing position three-dimensional reconstruction data through the first network model to obtain standing position skeleton segmentation data of the preset joint;
the second segmentation module is used for segmenting the motion two-dimensional image sequence of the preset joint by using a second network model to obtain motion skeleton segmentation data of the preset joint;
The registration calculation module is used for calculating transformation parameters corresponding to each frame of the motion skeleton segmentation data of the preset joint through a 2D-3D registration method;
And the four-dimensional imaging module is used for calculating standing position three-dimensional imaging data based on the standing position skeleton segmentation data of the preset joint and the transformation parameters so as to generate 4D standing position three-dimensional imaging.
Each of the above embodiments is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of the other embodiments.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely a preferred embodiment of the present invention; those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (8)

1. A method for three-dimensional imaging of a 4D joint standing position, comprising:
acquiring a motion two-dimensional image sequence of a joint preset by a user and three-dimensional reconstruction data of a standing position;
training a first network model for bone segmentation of standing three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT data set;
Performing skeleton segmentation on the standing position three-dimensional reconstruction data through the first network model to obtain standing position skeleton segmentation data of the preset joint; the method specifically comprises the following steps: performing voxel level segmentation on each bone in the standing position three-dimensional reconstruction data through the first network model to obtain standing position bone segmentation data of the preset joint; the preset joints comprise knee joints and hip joints; wherein the standing position skeleton segmentation data of the knee joint comprises femur, patella, tibia and fibula; the standing position skeleton segmentation data of the hip joint comprises hip bones and thighbones;
Dividing the motion two-dimensional image sequence of the preset joint by using a second network model to obtain motion skeleton division data of the preset joint; the method specifically comprises the following steps: dividing the motion two-dimensional image sequence of the preset joint through the second network model to obtain motion skeleton division data of the preset joint; wherein the moving bone segmentation data comprises a bone region, a soft tissue region, and an open beam region;
Calculating transformation parameters corresponding to each frame of the moving skeleton segmentation data of the preset joint by a 2D-3D registration method;
calculating standing position three-dimensional imaging data based on standing position skeleton segmentation data of the preset joint and the transformation parameters to generate 4D standing position three-dimensional imaging;
the first network model and the second network model are deep learning network models.
2. The method of claim 1, further comprising, after the segmenting the moving two-dimensional image sequence of the preset joint using the second network model to obtain moving bone segmentation data of the preset joint:
and carrying out edge extraction on the motion skeleton segmentation data of the preset joint.
3. The method for three-dimensional imaging of a standing position of a 4D joint according to claim 2, wherein the calculating, by a 2D-3D registration method, transformation parameters corresponding to each frame of the moving bone segmentation data of the preset joint includes:
When the preset joint is a knee joint, calculating a rotation angle of bones in the knee joint by using motion bone segmentation data of the knee joint;
Rotating the standing bone segmentation data based on the rotation angle of bones in the knee joint, and performing simulated projection based on scanning parameters of a standing three-dimensional CT device;
and registering the femur, the tibia, the fibula and the patella in the knee joint in sequence based on a template matching method to obtain translation parameters of each bone in the knee joint.
4. The method for three-dimensional imaging of a standing position of a 4D joint according to claim 2, wherein the calculating, by a 2D-3D registration method, transformation parameters corresponding to each frame of the moving bone segmentation data of the preset joint includes:
when the preset joint is a hip joint, detecting the femur edge in the edge image of the motion DR sequence through Hough transformation and obtaining the rotation angle of the femur in the hip joint;
Rotating the standing position skeleton segmentation data based on the rotation angle of the femur in the hip joint, and performing simulated projection based on the scanning parameters of the standing position three-dimensional CT equipment to obtain a simulated projection image;
and calculating the rotation center of the femur in the simulated projection image, and obtaining the translation parameter of the femur according to the displacement difference between the motion two-dimensional image and the rotation center of the femur in the simulated projection image.
5. The 4D joint stance three-dimensional imaging method of claim 1, wherein the calculating stance three-dimensional imaging data based on stance bone segmentation data of the preset joint and the transformation parameters comprises:
When the preset joint is a knee joint, performing rigid transformation on the standing position skeleton segmentation data of the knee joint based on the transformation parameters; the rigid transformation of each bone in the knee joint involves three degrees of freedom: translation of the three-dimensional data along the x, z axes and rotation about the y axis;
Wherein the rotation angle about the y-axis is the rotation angle of the bone in the knee joint; the translation parameters along the x, z axes are the translation parameters of each bone in the knee joint.
6. The 4D joint stance three-dimensional imaging method of claim 1, wherein the calculating stance three-dimensional imaging data based on stance bone segmentation data of the preset joint and the transformation parameters comprises:
when the preset joint is a hip joint, performing rigid transformation on the standing position skeleton segmentation data of the hip joint based on the transformation parameters;
The rigid transformation of the femur in the hip joint involves three degrees of freedom: translation of the three-dimensional data along the x, z axes and rotation about the y axis; the rotation angle around the y-axis is the rotation angle of the bone in the hip joint; the translation parameters along the x and z axes are the translation parameters of each bone in the hip joint;
the rigid transformation of the hip bone in the hip joint involves two degrees of freedom: translation of the three-dimensional data along the x, z axes; and the translation parameters along the x and z axes are the sum of displacement difference of femur rotation centers in the two-dimensional moving image and the simulated projection image and translation of bones in the horizontal direction according to the depth learning segmentation template of the moving two-dimensional image sequence.
7. The method of any one of claims 1-6, further comprising, prior to training the first network model for bone segmentation of the preset joint stance three-dimensional reconstruction data based on the preprocessed internal CT dataset:
Isotropic resampling, ROI cropping of the preset joint, padding and voxel-level bone data labeling are performed on the internal CT dataset;
Projection reconstruction is performed under the scanning parameters of a standing position three-dimensional CT device using a simulated CBCT program, so as to migrate the spiral CT dataset to a CBCT-style dataset.
8. A 4D joint standing position three-dimensional imaging device, characterized by applying the 4D joint standing position three-dimensional imaging method according to any one of claims 1 to 7 and comprising:
the data acquisition module is used for acquiring a motion two-dimensional image sequence of a joint preset by a user and three-dimensional reconstruction data of a standing position;
the model training module is used for training a first network model for performing skeleton segmentation on the standing position three-dimensional reconstruction data of the preset joint based on the preprocessed internal CT data set;
the first segmentation module is used for carrying out skeleton segmentation on the standing position three-dimensional reconstruction data through the first network model to obtain standing position skeleton segmentation data of the preset joint;
the second segmentation module is used for segmenting the motion two-dimensional image sequence of the preset joint by using a second network model to obtain motion skeleton segmentation data of the preset joint;
The registration calculation module is used for calculating transformation parameters corresponding to each frame of the motion skeleton segmentation data of the preset joint through a 2D-3D registration method;
And the four-dimensional imaging module is used for calculating standing position three-dimensional imaging data based on the standing position skeleton segmentation data of the preset joint and the transformation parameters so as to generate 4D standing position three-dimensional imaging.
CN202410139598.6A 2024-02-01 2024-02-01 Three-dimensional imaging method and device for 4D joint standing position Active CN117671162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410139598.6A CN117671162B (en) 2024-02-01 2024-02-01 Three-dimensional imaging method and device for 4D joint standing position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410139598.6A CN117671162B (en) 2024-02-01 2024-02-01 Three-dimensional imaging method and device for 4D joint standing position

Publications (2)

Publication Number Publication Date
CN117671162A (en) 2024-03-08
CN117671162B (en) 2024-04-19

Family

ID=90066449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410139598.6A Active CN117671162B (en) 2024-02-01 2024-02-01 Three-dimensional imaging method and device for 4D joint standing position

Country Status (1)

Country Link
CN (1) CN117671162B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113450396A (en) * 2021-06-17 2021-09-28 北京理工大学 Three-dimensional/two-dimensional image registration method and device based on bone features
CN115018977A (en) * 2022-05-25 2022-09-06 吉林大学 Semi-automatic registration method based on biplane X-ray and joint three-dimensional motion solving algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007041108A1 (en) * 2007-08-30 2009-03-05 Siemens Ag Method and image evaluation system for processing medical 2D or 3D data, in particular 2D or 3D image data obtained by computer tomography


Also Published As

Publication number Publication date
CN117671162A (en) 2024-03-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant