WO2023078309A1 - Target feature point extraction method and apparatus, computer device, and storage medium - Google Patents
Target feature point extraction method and apparatus, computer device, and storage medium
- Publication number
- WO2023078309A1 (Application No. PCT/CN2022/129336)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- template
- processed
- point
- feature point
- Prior art date
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/35—Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/30008—Bone
Definitions
- the present application relates to the technical field of image processing, and in particular to a target feature point extraction method and apparatus, a computer device, and a storage medium.
- the automatic extraction of bone anatomical feature points from CT or MR images can be widely used in medical-image-based auxiliary diagnosis and auxiliary treatment scenarios.
- people's requirements for the quality of medical services are increasing day by day.
- the precision and digitalization of bone surgery has become a trend in global medical development. Automatic extraction of feature points can therefore assist doctors in surgical planning and improve surgical efficiency, and it also helps patients in areas where medical resources are relatively scarce to enjoy better surgical outcomes.
- at present, the extraction of bone anatomical feature points is usually completed manually by experienced doctors. Since selecting the positions of bone anatomical feature points is a key step in preoperative planning, it places high demands on the doctor's anatomical and imaging knowledge and clinical experience; moreover, manually acquiring feature points costs doctors a great deal of time and energy, and the operation is complicated.
- the present application provides a method for extracting target feature points, including:
- the template is an image generated based on a sample image, and the standard feature points are located in the template;
- before acquiring the template corresponding to the image to be processed, the method further includes generating the template;
- said generating said template comprises:
- the calculating the statistical image corresponding to the registration image includes:
- the method further includes:
- the method further includes:
- the similarity between the statistical image and the initial template is calculated according to the distances of all corresponding points.
- the image to be processed and the sample images are three-dimensional mesh point cloud images; before registering the initial template with the remaining sample images, the method further includes: preprocessing the image to be processed; and/or
- the method further includes:
- Preprocessing the sample image includes at least one of surface point cloud extraction, point cloud downsampling, and normalization;
- the surface point cloud extraction is to extract the vertices of all meshes in the image to be processed and the sample images to obtain a surface point cloud;
- the point cloud downsampling is to divide the image to be processed into at least one processing area and take the point in each processing area that is closest to the center of that processing area as its sampling point; the normalization is to align the points in the image to be processed to the same coordinate space.
- the registering the template with the image to be processed includes:
- according to the positional relationship between the template and the standard feature points, combined with the positional relationship between the registered template and the image to be processed, determining the target feature points corresponding to the standard feature points in the image to be processed;
- the determining of the target feature points corresponding to the standard feature points includes:
- when the distance between the intersection point and the standard feature point is less than a preset distance, the intersection point is used as the target feature point;
- the target feature points corresponding to the standard feature points also include:
- the image to be processed is a bone image to be processed
- the standard feature point is a bone feature point
- the bone feature point includes at least one of a femur feature point and a tibial feature point.
- a method for processing bone data comprising:
- the bone feature points are processed according to preset rules.
- the processing the skeletal feature points according to preset rules includes:
- At least one of femoral mechanical axis, femoral condylar line and tibial mechanical axis is calculated according to the bone feature points.
- the processing the skeletal feature points according to preset rules includes:
- the bone feature points are optimized according to preset rules.
- the present application also provides a target feature point extraction device, including:
- a data acquisition module configured to acquire images to be processed
- a template query module configured to obtain a template corresponding to the image to be processed, and to obtain the positional relationship between the template and the standard feature points;
- a registration module configured to register the template with the image to be processed
- a target extraction module configured to determine the target feature points corresponding to the standard feature points in the image to be processed according to the positional relationship between the template and the standard feature points, combined with the positional relationship between the registered template and the image to be processed.
- the present application also provides a computer device.
- the computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
- the present application also provides a computer-readable storage medium.
- the computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the above method are realized.
- the present application also provides a computer program product.
- the computer program product includes a computer program, and when the computer program is executed by a processor, the steps of the above method are realized.
- with the above target feature point extraction method and apparatus, computer device, and storage medium, the template can be registered with the image to be processed, and the target feature points can be extracted from the image to be processed according to the positional relationship between the template and the standard feature points and the positional relationship between the registered template and the image to be processed, thereby realizing automatic extraction of the target feature points and improving the efficiency of feature point acquisition.
- FIG. 1 is an application environment diagram of a method for extracting target feature points in an embodiment of the present application.
- FIG. 2 is a schematic flowchart of a method for extracting target feature points in an embodiment of the present application.
- FIG. 3 is a schematic diagram of segmentation and surface reconstruction of a lower limb bone in an embodiment of the present application.
- FIG. 4 is a schematic flowchart of a template generation method in another embodiment of the present application.
- Fig. 5 is a schematic diagram of acquiring statistical bone images in an embodiment of the present application.
- FIG. 6 is a schematic diagram of standard feature point configuration in an embodiment of the present application.
- FIG. 7 is a schematic diagram of comparing the similarity of two skeleton images in an embodiment of the present application.
- FIG. 8 is a schematic diagram of a point cloud downsampling method in an embodiment of the present application.
- FIG. 9 is a schematic diagram of data preprocessing in an embodiment of the present application.
- FIG. 10 is a schematic diagram of normalization of bone point cloud data in an embodiment of the present application.
- Fig. 11 is a schematic diagram of non-rigid registration in an embodiment of the present application.
- FIG. 12 is a schematic diagram of a registration function optimization process using the Expectation-Maximization (EM) algorithm in an embodiment of the present application.
- FIG. 13 is a schematic diagram of the principle of surface feature point extraction in an embodiment of the present application.
- Fig. 14 is a schematic diagram of the principle of surface feature point extraction in another embodiment of the present application.
- FIG. 15 is a schematic diagram of the principle of extracting non-surface bone feature points in an embodiment of the present application.
- FIG. 16 is a schematic flowchart of a method for processing skeleton data in an embodiment of the present application.
- FIG. 17 is a schematic diagram of feature point position optimization in an embodiment of the present application.
- FIG. 18 is a structural block diagram of a target feature point extraction device in an embodiment of the present application.
- Fig. 19 is a structural block diagram of a skeleton data processing device in an embodiment of the present application.
- FIG. 20 is an internal structure diagram of a computer device in an embodiment of the present application.
- the target feature point extraction method provided in this application can be applied to the application environment shown in FIG. 1 .
- the terminal 102 communicates with the medical imaging device 104 through a network.
- the terminal 102 can receive the 3D image scanned by the medical imaging device 104 and stored as a 3D matrix, perform 3D reconstruction on the 3D image to obtain an image to be processed, and then obtain a pre-generated template corresponding to the image to be processed;
- the template and the image to be processed are registered, and the target feature points corresponding to the standard feature points in the image to be processed are determined according to the positional relationship between the template and the standard feature points, combined with the positional relationship between the registered template and the image to be processed.
- in this way, the target feature points corresponding to the standard feature points can be automatically extracted from the image to be processed through the mapping relationship established by the registered template, so there is no need to manually extract the target feature points from the image to be processed, which saves a lot of time and improves efficiency.
- the terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, as well as functional modules and dedicated circuits of the medical imaging device itself.
- the medical imaging device 104 includes, but is not limited to, various imaging devices, such as CT (Computed Tomography) imaging devices, which use precisely collimated X-ray beams and highly sensitive detectors to perform cross-sectional scans around a certain part of the human body one section at a time; accurate three-dimensional position images of, for example, tumors can be reconstructed from CT scans;
- magnetic resonance (MR) devices, a type of tomography that uses the magnetic resonance phenomenon to obtain electromagnetic signals from the human body and reconstruct images of human body information;
- positron emission computed tomography (PET) devices;
- and PET/MR (positron emission magnetic resonance imaging) systems.
- a method for extracting target feature points is provided.
- the method is applied to the terminal in FIG. 1 as an example for illustration, including the following steps:
- the image to be processed is three-dimensional surface mesh data, which may be obtained by performing three-dimensional reconstruction on a three-dimensional image collected by a medical imaging device.
- if the data collected by the medical imaging device is already three-dimensional surface mesh data, there is no need to perform three-dimensional reconstruction on it.
- medical imaging data such as scanned CT or MR data are generally 3D images, that is, medical image data stored in the form of a 3D matrix; the 3D image includes the target to be processed, such as the target bone or organ where the target feature points are located.
- the three-dimensional reconstruction may specifically include: firstly, the terminal performs image segmentation on the target to be processed in the three-dimensional matrix through image segmentation technology to obtain mask data stored in the form of a three-dimensional matrix, and then performs three-dimensional reconstruction on the mask data to obtain the image to be processed .
- the image segmentation technology includes, but is not limited to, image segmentation based on deep-learning fully convolutional networks, on traditional machine learning (such as random forests), or on techniques such as clustering, region growing, active contours, level sets, and thresholding.
- methods for three-dimensional reconstruction of the mask data include, but are not limited to, the Marching Cubes algorithm, which performs interpolation reconstruction near the contour according to a surface threshold, and the Poisson surface reconstruction algorithm.
- FIG. 3 is a schematic diagram of segmentation and surface reconstruction of lower limb bones in an embodiment.
- the terminal uses image segmentation technology to segment the CT image to obtain bone data in the transverse, sagittal and coronal planes, that is, to extract the bone pixels in the image to be processed, and then uses a surface reconstruction method to express the surface of the segmented bone pixels in the form of mesh data, thereby obtaining the bone image to be processed.
- the bone image to be processed can be used for subsequent registration.
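- as an illustration only, a minimal sketch of how a segmentation mask stored as a 3D matrix could be turned into surface mesh data with the Marching Cubes algorithm is given below; the use of scikit-image is an assumption of this sketch, not something specified by the patent:

```python
# Illustrative sketch: reconstruct a surface mesh from a binary segmentation
# mask with Marching Cubes.  scikit-image is assumed here, not named by the patent.
import numpy as np
from skimage import measure

def reconstruct_surface(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """mask: 3D binary array (1 = bone voxel); returns mesh vertices and faces."""
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces

# Example on a synthetic spherical "bone" mask.
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2
verts, faces = reconstruct_surface(mask)
print(verts.shape, faces.shape)
```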
- S204 Obtain a template corresponding to the image to be processed, and obtain a positional relationship between the template and standard feature points; the template is an image generated based on the sample image, and the standard feature points are located in the template.
- the template is pre-generated based on the sample image, which is used to characterize the standard shape of the bone or organ corresponding to the image to be processed.
- the template can be generated from sample images of a user collected before the operation, or a template suitable for a large number of users can be generated from many sample images of different users, for example from the average of the sample images, so that a template does not need to be generated for each user before the operation.
- the standard feature points are the feature points of the bones or organs selected in the template.
- the standard feature points can be manually selected in the template by a doctor. It should be noted here that feature lines, feature surfaces, and feature areas can all be regarded as being composed of feature points.
- the standard feature points correspond to the target feature points in the image to be processed.
- the positional relationship between the template and the standard feature points is data used to characterize the position of the standard feature points in the template, where the positional relationship can be determined in the image coordinate system where the template is located.
- the template and the positional relationship between the standard feature points and the template are pre-generated.
- a new standard feature point can also be marked in the template in real time, which is not specifically limited here.
- the template data can be stored according to the type of bone when storing, so that after the image to be processed is acquired, the corresponding template that has been stored can be selected according to the type of bone corresponding to the image to be processed.
- the registration here refers to surface registration, which unifies the three-dimensional surface grid data in the template and the three-dimensional surface grid data of the image to be processed into the same coordinate system.
- through registration, the positions of the three-dimensional surface mesh data in the template and the positions of the three-dimensional surface mesh data of the image to be processed can be brought into one-to-one correspondence, thereby laying the foundation for obtaining the target feature points in the image to be processed.
- the surface registration may include, but is not limited to, non-rigid registration algorithms.
- the target feature point refers to the point on the image to be processed to which a standard feature point in the template is mapped after the template is registered with the image to be processed.
- after registration, the meshes in the template and in the image to be processed are in one-to-one correspondence, so that the standard feature points in the template also correspond to target feature points in the image to be processed; the target feature points are the feature points to be extracted.
- the terminal can extract the target feature points corresponding to multiple standard feature points in parallel, thereby improving the efficiency of target feature point extraction.
- the terminal may also output the target feature points, so as to facilitate examination by a doctor or the like.
- if the doctor confirms the extracted target feature points, they are taken as correct; if there is a problem, an adjustment instruction for the target feature points can be received, and the target feature points are fine-tuned according to the adjustment instruction, so as to ensure the accuracy of the output target feature points.
- in this way, the template can be registered with the image to be processed, and the target feature points can be extracted from the image to be processed according to the positional relationship between the template and the standard feature points and the positional relationship between the registered template and the image to be processed, so that the target feature points are extracted automatically and efficiency is improved.
- the template generation method may include:
- the sample image is 3D surface grid data, which may be obtained by reconstructing a 3D image collected by a medical imaging device.
- for the 3D reconstruction method, please refer to the above description.
- the lower extremity bone is still taken as an example for illustration.
- the terminal collects a large amount of lower-extremity orthopedic medical image data from different patients as a training set, and then segments and reconstructs the medical image data in the training set according to the above three-dimensional reconstruction method to obtain the sample images.
- S404 Select an initial template from several sample images.
- the initial template may be any one randomly selected from the several sample images. It should be noted that when there is only one set of sample images, it is used directly as the template; if there are at least two sets of sample images, any one of them is selected as the initial template.
- the registration image is obtained by using a registration algorithm to register the initial template to the remaining sample images, for example, using a non-rigid registration algorithm to map the initial template to other remaining sample images to obtain a registration image.
- the statistical image is calculated from the registered images according to certain rules, for example by averaging, taking the maximum, or taking the median of the positions of the points in the registered images.
- the statistical image can be used as a representative to reflect the overall situation of the registration.
- the statistical image can be used for subsequent similarity comparison with the initial template to further obtain the similarity between the statistical image and the initial template.
- the similarity is a quantitative value that reflects the similarity between the statistical image and the initial template. The higher the similarity between the statistical image and the initial template, the more similar the statistical image is to the initial template; otherwise, the less similar it is.
- the similarity can be calculated according to the distance between the statistical image and the corresponding point in the initial template.
- the similarity between the statistical image and the initial template meets the requirements, which means that the similarity between the statistical image and the initial template is greater than or equal to the preset threshold.
- the terminal considers that the statistical image is sufficiently similar to the initial template, and then uses the statistical image as the template corresponding to the sample images; for example, if the sample images are lower-limb bone images of patients, the template is a lower-limb bone template.
- if the similarity between the statistical image and the initial template is less than the preset threshold, the currently obtained statistical image is used as the initial template for the next iteration: the current initial template is again registered with the sample images to obtain registered images, and the statistical image corresponding to the registered images is calculated, until the similarity between the statistical image and the initial template is greater than or equal to the preset threshold, at which point the final template is obtained.
- the preset threshold of the similarity can be adjusted according to the actual situation.
- the template is obtained by iteratively registering the sample images with the initial template, calculating the statistical image corresponding to the registered images, and comparing the similarity between the statistical image and the initial template; a sketch of this loop is given below.
- a template obtained in this way is more realistic and accurate, and lays a good foundation for the subsequent registration between the image to be processed and the standard image and for obtaining the target feature points.
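- purely as an illustration of the loop described above, a minimal sketch of the iterative template generation follows; the callbacks register and similarity are hypothetical stand-ins for the registration and similarity steps, and the sample images are assumed to be point arrays with consistent point correspondence after registration:

```python
import numpy as np

def generate_template(sample_images, register, similarity, threshold, max_iters=20):
    """Iteratively refine a template: register it to the remaining samples,
    average the registered point positions into a statistical image, and stop
    when the statistical image is similar enough to the current template."""
    template = sample_images[0]                       # any sample can serve as the initial template
    for _ in range(max_iters):
        registered = [register(template, img) for img in sample_images[1:]]
        statistical = np.mean(np.stack(registered, axis=0), axis=0)  # average corresponding points
        if similarity(statistical, template) >= threshold:
            return statistical                        # similar enough: use as the final template
        template = statistical                        # otherwise iterate with the new initial template
    return template
```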
- the step of calculating the statistical image corresponding to the registered images includes: obtaining the initial position of each corresponding point in each registered image; calculating the average of the initial positions of each corresponding point as the target position of that corresponding point, and generating the statistical image from the target positions of the corresponding points.
- obtaining the initial positions of the corresponding points in the registered images means that, after the sample images in the training set are registered with the initial template, the terminal can obtain the positions of the corresponding points of the mesh data in each registered image; averaging these positions gives the average position of each point, and the statistical image is generated from the average positions of all points.
- FIG. 5 is a schematic diagram of acquiring statistical bone images in an embodiment.
- the corresponding statistical image can be accurately obtained by registering the corresponding points of the grid data in the image for calculation.
- the target feature point extraction method further includes: receiving a standard feature point configuration instruction for the template; and configuring corresponding standard feature points in the template according to the standard feature point configuration instruction.
- the standard feature point configuration instruction is a computer instruction for obtaining standard feature points on the template, which may be input by the user according to the application scenario; for example, the standard feature point configuration instruction may be an instruction for the doctor to select anatomical feature points, feature surfaces, or feature lines in the standard bone template. Configuring the corresponding standard feature points in the template according to the configuration instruction specifically means that, after receiving the standard feature point configuration instruction, the terminal marks the corresponding feature points, feature surfaces, or feature lines on the template according to the instruction.
- for example, feature points such as the distal point of the lateral femoral condyle and the distal point of the medial femoral condyle are marked on the femur.
- FIG. 6 is a schematic diagram of standard feature point configuration in an embodiment, and its standard feature point configuration instruction is to configure feature points for anatomical feature points in standard lower limb bones;
- the corresponding anatomical feature points configured in the standard lower limb bone include any one or more of the hip joint center 1, the lateral femoral condyle 2, the medial femoral condyle 3, the femoral intercondylar notch 4, the distal point of the lateral femoral condyle 5, the distal point of the medial femoral condyle 6, the posterior end point of the lateral femoral condyle 7, the posterior end point of the medial femoral condyle 8, the lateral tibial plateau 9, the medial tibial plateau 10, the tibial spine 11, the tibial tuberosity 12, the lateral ankle 13, the medial ankle 14, and the ankle midpoint 15.
- in this way, the required standard feature points can be obtained on the template, and these feature points can subsequently be used to determine the corresponding feature points in the image to be processed.
- the target feature point extraction method further includes: calculating the distances between corresponding points of the statistical image and the initial template; and calculating the similarity between the statistical image and the initial template according to the distances of all corresponding points.
- the terminal first calculates the distance between the statistical image and the corresponding points of each set of grid data in the initial template, and calculates the similarity between the statistical image and the initial template according to the distance between the corresponding points of each set of grid data .
- the similarity can be expressed as the reciprocal of the average distance between all corresponding points.
- FIG. 7 is a schematic diagram of comparing the similarity of two skeleton images in an embodiment. The terminal calculates the distance d_i between each point P'(i) of the statistical skeleton image and the corresponding point P(i) in the initial skeleton template, and the corresponding similarity, where the similarity can be expressed as the reciprocal of the average corresponding-point distance: similarity = 1 / ((1/m) * (d_1 + d_2 + ... + d_m)),
- where m is the number of points contained in the initial bone template and d_i (i = 1, ..., m) represents the distance between the i-th pair of corresponding points. The smaller the average distance between the corresponding points of the statistical skeleton image and each set of mesh data of the initial skeleton template, the greater the similarity between the statistical skeleton image and the initial skeleton template.
- if the similarity is greater than a certain threshold, it is considered that the statistical bone image is sufficiently similar to the standard bone template, and the statistical bone image obtained at this time can be used as the standard model.
- the threshold of similarity can be adjusted according to the actual situation.
- the standard model corresponding to the sample image can be accurately obtained by calculating the similarity between the statistical image and the initial template.
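- a minimal sketch of this similarity measure (the reciprocal of the mean corresponding-point distance) is shown below; the assumption that both images are given as m x 3 point arrays in corresponding order is illustrative only:

```python
import numpy as np

def similarity(statistical_points: np.ndarray, template_points: np.ndarray) -> float:
    """Both arguments are (m, 3) arrays of corresponding points; a larger
    return value means the two point sets are more similar."""
    distances = np.linalg.norm(statistical_points - template_points, axis=1)  # d_1 ... d_m
    return 1.0 / distances.mean()
```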
- the image to be processed and the sample images are three-dimensional mesh point cloud images. Before registering the initial template with the remaining sample images, the target feature point extraction method further includes preprocessing the image to be processed; and/or, before registering the template with the image to be processed, the target feature point extraction method further includes preprocessing the sample images. The preprocessing includes at least one of surface point cloud extraction, point cloud downsampling, and normalization, where surface point cloud extraction is to extract the vertices of all meshes in the image to be processed and the sample images to obtain a surface point cloud; point cloud downsampling is to divide the image to be processed into at least one processing area and take the point closest to the center of each processing area as the sampling point of that processing area; and normalization is to align the points in the image to be processed to the same coordinate space.
- the processing areas are obtained by dividing the entire space equidistantly according to a preset distance, which may be set according to the actual application scenario; for example, if the preset distance is L, the entire space may be divided into several processing areas of side length L.
- the sampling point refers to the point selected from the image to be processed according to the preset rules. For example, the sampling point can be obtained by dividing the processing area of the image to be processed and selecting the point closest to the center of the processing area from the processing area.
- extracting the surface point cloud refers to extracting all the vertices of the mesh to obtain the surface point cloud of the image to be processed and/or the sample image.
- point cloud downsampling refers to dividing the entire space in which the input point cloud lies into several small cube spaces (processing areas) according to a certain distance L. Each small cube space may or may not contain points of the surface point cloud of the image to be processed or the sample image. If a small cube space contains only one point of the surface point cloud, that point is kept directly; otherwise, the distance from each point to the center of the small cube space is calculated, only the point closest to the center is kept as the sampling point, and the remaining points are removed.
- FIG. 8 is a schematic diagram of a point cloud downsampling method in an embodiment, where the solid points are the points closest to the cube centers and the hollow points are the other points. After point cloud downsampling, only the point closest to the center of each cube, i.e. the solid point in the figure, is kept.
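- a minimal sketch of this voxel-grid style downsampling, assuming the point cloud is an (n, 3) numpy array and L is the preset distance, is:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, L: float) -> np.ndarray:
    """Divide space into cubes of side L and keep, in every occupied cube,
    only the point closest to that cube's center."""
    cell = np.floor(points / L).astype(np.int64)        # cube index of each point
    centers = (cell + 0.5) * L                          # center of each point's cube
    dist = np.linalg.norm(points - centers, axis=1)     # distance of each point to its cube center
    keep = {}
    for i, key in enumerate(map(tuple, cell)):
        if key not in keep or dist[i] < dist[keep[key]]:
            keep[key] = i                               # remember the closest point seen so far
    return points[sorted(keep.values())]
```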
- Fig. 9 is a schematic diagram of data preprocessing in an embodiment: sparse point clouds of the standard bone template and the patient bone image are obtained after surface point cloud extraction and point cloud downsampling.
- normalization refers to transforming the sample images and the image to be processed into the same coordinates, which makes subsequent data processing more convenient. For example, if the sample images, the image to be processed and their corresponding templates were not acquired at the same position, that is, they are not in the same coordinate space, then all sample images, the image to be processed and the corresponding templates are aligned to the same coordinate space, for example by aligning their sampling points to the same coordinate space.
- specifically, the centroid coordinates C_center of all points of the sample images, the image to be processed and their corresponding templates are first calculated, and the point cloud is translated by -C_center so that its centroid coincides with the origin of the coordinate system; the variance Var of the translated point cloud coordinates is then calculated, and the coordinates of each point in the point cloud are divided by the square root of Var, thereby obtaining the normalized point cloud data.
- Figure 10 is a schematic diagram of the normalization of bone point cloud data in an embodiment: the terminal first adjusts the mean value of the bone point cloud to 0, and then adjusts the variance of the point cloud to 1, obtaining the normalized femur point cloud data.
- the calculation speed and convergence speed of subsequent registration operations can be accelerated by preprocessing the sample image and the image to be processed.
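- a minimal sketch of the normalization step, assuming the point cloud is an (n, 3) numpy array and that a scalar variance over all coordinates is used (the patent does not spell out whether the variance is computed per axis or over all coordinates), is:

```python
import numpy as np

def normalize(points: np.ndarray) -> np.ndarray:
    """Translate the point cloud so its centroid is at the origin, then scale
    it so the coordinate variance is 1 (mean 0, variance 1 as in FIG. 10)."""
    centered = points - points.mean(axis=0)   # move the centroid C_center to the origin
    var = centered.var()                      # variance of the translated coordinates
    return centered / np.sqrt(var)            # divide by the standard deviation
```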
- the step of registering the template with the image to be processed includes: obtaining a registration function and initializing it; inputting the image to be processed and the template into the registration function to optimize the parameters of the registration function; when the change in the parameters of the registration function between before and after optimization is less than a preset standard, judging that the registration between the template and the image to be processed is completed; otherwise, continuing to input the image to be processed and the template into the registration function with the optimized parameters so as to further optimize the parameters of the registration function.
- the registration function refers to a program for realizing the registration of the template and the image to be processed.
- a registered template can be obtained.
- in specific implementation, the terminal first obtains the corresponding registration function and initializes it, where initializing the registration function includes initializing its parameters. The image to be processed and the template are input into the registration function to optimize the parameters of the registration function and obtain the corresponding registered image. Specifically, according to the current parameters of the registration function, the input template and the image to be processed, the terminal uses Bayes' theorem to calculate the posterior probability matrix, calculates the optimization direction of the registration function, and updates the corresponding parameters according to that optimization direction. It then judges whether the change in the parameters before and after optimization is less than the preset standard, which can be adjusted according to the actual situation. If so, the registration between the template and the image to be processed is completed, and the registered template and the image to be processed are output; otherwise, the optimized parameters continue to be used as the current parameters of the registration function and the process is repeated.
- the image to be processed and the template input into the registration function are preprocessed data, where the preprocessing includes at least one of surface point cloud extraction, point cloud downsampling and normalization, which can speed up the calculation of the registration function.
- Fig. 11 is a schematic diagram of non-rigid registration in an embodiment.
- the M circular points y_1, ..., y_M are points on the standard bone template, and the N triangular points x_1, ..., x_N are points on the skeleton image to be processed.
- the points y_1, ..., y_M are taken as the centroids of a Gaussian mixture model (GMM), and the probability density of the GMM at a data point x is: p(x) = w * (1/N) + (1 - w) * Σ_{m=1..M} (1/M) * N(x; y_m, σ²·I),
- where w is the probability of outliers and N(x; y_m, σ²·I) denotes an isotropic Gaussian centered at y_m with variance σ².
- the purpose of registration is to maximize the probability of X = {x_1, ..., x_N} under the GMM by transforming the GMM means Y = {y_1, ..., y_M}. Assuming that the means Y of the GMM are transformed by a parameter θ, the registration function to be optimized is the negative log-likelihood E(θ, σ²) = -Σ_{n=1..N} log p(x_n), which is minimized with respect to θ and σ².
- FIG. 12 is a schematic diagram of a process of optimizing the registration function by using the Expectation-Maximization (EM) algorithm in an embodiment.
- in this way, the standard skeleton template can be registered with the skeleton image to be processed through the registration function, and the corresponding registered standard skeleton template is obtained.
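- as a hedged illustration of the E-step described above (computing the posterior probability matrix with Bayes' theorem for a GMM whose centroids are the template points, with a uniform outlier component of weight w), a minimal numpy sketch follows; the M-step, which updates the transformation parameters θ and the variance σ², depends on the chosen transformation model and is omitted:

```python
import numpy as np

def posterior_matrix(X: np.ndarray, Y: np.ndarray, sigma2: float, w: float) -> np.ndarray:
    """E-step of a CPD-style registration.  X: (N, D) points of the image to be
    processed; Y: (M, D) template points (GMM centroids); sigma2: current GMM
    variance; w: outlier probability.  Returns the (M, N) posterior matrix
    P[m, n] = p(component m | data point x_n)."""
    N, D = X.shape
    M = Y.shape[0]
    sq = np.sum((Y[:, None, :] - X[None, :, :]) ** 2, axis=2)   # (M, N) squared distances
    gauss = np.exp(-sq / (2.0 * sigma2))
    # Constant contributed by the uniform outlier component.
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * (w / (1.0 - w)) * (M / N)
    return gauss / (gauss.sum(axis=0) + c)                      # normalize over components + outlier
```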
- the step of determining the target feature points corresponding to the standard feature points in the image to be processed includes: when the standard feature point is on the surface of the template, obtaining the normal vector at the standard feature point in the template registered with the image to be processed; when the line along the normal vector has an intersection point with the registered image to be processed, calculating the distance between the intersection point and the standard feature point; when the distance between the intersection point and the standard feature point is less than the preset distance, using the intersection point as the target feature point; and when there is no intersection point, or the distance between the intersection point and the standard feature point is greater than the preset distance, selecting from the image to be processed the point closest to the standard feature point in the registered template as the target feature point.
- in specific implementation, it is first judged whether the standard feature point is on the surface of the standard bone template. After the template is registered with the image to be processed, the normal vector at the standard feature point in the registered template is determined and a straight line is drawn along the normal vector; it is then judged whether this line has an intersection point with the image to be processed and, if so, whether the distance between the intersection point and the standard feature point is less than the preset distance. Different operations are then performed according to the different cases to extract the target feature point in the image to be processed, where the preset distance can be adjusted according to the actual application scenario.
- Figure 13 is a schematic diagram of the principle of surface feature point extraction in an embodiment.
- Figure 13 shows the case where the line along the normal vector of the standard bone template at a standard bone feature point intersects the bone image to be processed and the intersection point is within the preset distance of the standard bone feature point.
- the points P represent the standard bone feature points in the registered standard bone template.
- taking the two bone feature points P_i and P_j as an example, the normal vectors of the standard bone template at P_i and P_j are determined and straight lines are drawn along them, and the distance between each line's intersection point with the skeleton image to be processed and the corresponding standard bone feature point is calculated; if this distance is less than the preset distance, the intersection point of the line with the skeleton image to be processed, i.e. the triangle in the figure, is determined as the bone feature point P_i' or P_j' of the skeleton image to be processed.
- FIG. 14 is a schematic diagram of the principle of surface feature point extraction in another embodiment.
- FIG. 14 shows the case where the line along the normal vector of the standard bone template at a standard bone feature point has no intersection point with the bone image to be processed.
- taking the bone feature point P_k as an example, the line along the normal vector of the standard bone template at P_k does not intersect the bone image to be processed, so the point of the bone image to be processed closest to the registered standard bone feature point is selected as the bone feature point of the bone image to be processed, i.e. P_k' at the position of the triangle in the figure.
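- a minimal sketch of the surface case, using the trimesh library purely as an illustrative assumption (the patent does not name a library), with the normal-vector ray cast and the nearest-point fallback:

```python
# Illustrative sketch: trimesh is an assumption of this example, not part of the patent.
import numpy as np
import trimesh

def extract_surface_feature(mesh: trimesh.Trimesh, p: np.ndarray,
                            normal: np.ndarray, max_dist: float) -> np.ndarray:
    """mesh: image to be processed; p and normal: registered standard feature
    point and its normal vector; max_dist: the preset distance."""
    origins = np.array([p, p])
    directions = np.array([normal, -normal])          # cast along both directions of the normal
    hits, _, _ = mesh.ray.intersects_location(origins, directions)
    if len(hits) > 0:
        d = np.linalg.norm(hits - p, axis=1)
        if d.min() < max_dist:
            return hits[d.argmin()]                   # intersection close enough: use it
    # No intersection, or intersection too far: fall back to the nearest surface point.
    closest, _, _ = trimesh.proximity.closest_point(mesh, p[None, :])
    return closest[0]
```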
- the step of determining the target feature point corresponding to the standard feature point in the image to be processed further includes: when the standard feature point is not on the template surface, selecting a preset number of points on the template surface according to the standard feature point as associated points; determining, according to the registration relationship between the template and the image to be processed, the target points corresponding to the associated points in the registered image; and calculating the target feature point of the image to be processed from the target points.
- when a standard feature point is not on the surface of the template, it is first necessary to select points on the nearby surface as associated points according to the structural features around the standard feature point.
- the associated points are points on the surface of the template that can reflect a standard feature point inside the template; taking the lower extremity bone as an example, the center of the sphere fitted to the bone associated points is the bone feature point inside the bone. After the associated points are determined, the template is registered with the image to be processed, and the positions of the target points corresponding to the associated points are obtained.
- the positions of the target points can be determined with reference to the case where the standard feature point is on the surface of the template, and the target feature point of the image to be processed is then calculated from the positions of the target points.
- for example, a sphere is fitted to the target points, and the center of the sphere is taken as the bone feature point of the bone image to be processed.
- FIG. 15 is a schematic diagram of the principle of non-surface bone feature point extraction in an embodiment.
- the left picture is the standard bone template.
- the right picture is the registered standard bone template.
- the registered standard bone template is obtained from the standard bone template through non-rigid registration. Taking the center point C of the femoral head as an example, N points P_1, ..., P_N on the nearby surface of the standard bone template are selected as the associated points, and a sphere fitted to these N points P_1, ..., P_N has C as its center.
- in this way, by selecting surface points near a standard feature point as associated points and using the registered associated points, standard feature points that are not on the surface can be obtained, which solves the problem of extracting target feature points of the image to be processed that do not lie on the surface.
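- a minimal sketch of the sphere-fitting step (a linear least-squares fit; the synthetic example data are illustrative only):

```python
import numpy as np

def fit_sphere_center(points: np.ndarray) -> np.ndarray:
    """Least-squares sphere fit: returns the center of the sphere best fitting
    the given (n, 3) surface points, e.g. the femoral-head associated points
    mapped onto the image to be processed."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]                                    # sol[3] equals r**2 - |center|**2

# Synthetic check: noisy samples on a sphere of radius 25 centered at (10, -5, 40).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 40.0]) + 25.0 * dirs + rng.normal(scale=0.1, size=(200, 3))
print(fit_sphere_center(pts))                         # approximately [10, -5, 40]
```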
- the image to be processed is a bone image to be processed
- the standard feature points are bone feature points
- the bone feature points include at least one of femoral feature points and tibial feature points.
- the bone image to be processed is obtained by three-dimensional reconstruction of a three-dimensional image of the patient's bone collected by the medical imaging device, and the standard feature points are the anatomical feature points on the pre-generated standard bone template. Taking the lower limb bone as an example, and continuing with the standard feature point configuration described above,
- the distal point 5 of the lateral femoral condyle, the distal point 6 of the medial femoral condyle, the posterior end point 7 of the lateral femoral condyle and the posterior end point 8 of the medial femoral condyle are feature points on the femoral side;
- the lateral tibial plateau 9 and the medial tibial plateau 10 are feature points on the tibial side and are reference points for measuring the amount of osteotomy during knee replacement.
- the skeleton data processing method includes: acquiring the bone image to be processed; processing the bone image to be processed according to the target feature point extraction method of any one of the above embodiments to obtain the bone feature points; and processing the bone feature points according to preset rules.
- FIG. 16 is a schematic flowchart of a bone data processing method in an embodiment.
- the terminal first acquires a bone image to be processed, where the bone image to be processed is obtained through 3D reconstruction of a three-dimensional image of the patient's bone collected by a medical imaging device.
- the step of processing the bone image to be processed according to the target feature point extraction method to obtain the bone feature points includes: after obtaining the patient bone image, the terminal queries the standard bone template corresponding to the patient bone image and obtains the standard bone feature points on it.
- the standard bone template may be a bone template of any part of the human body and can be generated according to the above-mentioned template generation method, and the standard bone feature points can be manually selected by the doctor on the standard bone template; the terminal registers the standard bone template with the patient's bone image to obtain the registered, i.e. deformed, standard bone template.
- the registration method may be, but is not limited to, a non-rigid registration algorithm. According to the positions of the standard feature points in the template, combined with the deformed standard bone template, the bone feature points of the patient's bone image corresponding to the standard bone feature points are determined; finally, the terminal processes the obtained bone feature points according to preset rules to obtain optimized bone feature points, thereby improving the position accuracy of the bone feature points.
- the standard bone feature points on the standard bone template can be mapped to the patient's bone, realizing automatic extraction of the position of the patient's bone feature points.
- processing the bone feature points according to preset rules includes: calculating at least one of the femoral mechanical axis, the femoral condylar line and the tibial mechanical axis according to the bone feature points.
- the femoral mechanical axis can be determined from the hip joint center 1 and the femoral intercondylar notch 4; the femoral condylar line can be determined from the lateral femoral condyle 2 and the medial femoral condyle 3; and the tibial mechanical axis can be determined from the tibial spine 11 and the ankle midpoint 15.
- the physiological axes of the lower limbs corresponding to the bone feature points can be calculated through the bone feature points, and these physiological axes can further determine the placement angle of the joint prosthesis.
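- a minimal sketch of computing these axes as unit direction vectors from the extracted feature points; the dictionary keys are illustrative names for the numbered anatomical points above, not identifiers defined by the patent:

```python
import numpy as np

def lower_limb_axes(pts: dict) -> dict:
    """pts maps illustrative names of the numbered anatomical points to 3D coordinates."""
    def axis(a, b):
        v = np.asarray(pts[b], float) - np.asarray(pts[a], float)
        return v / np.linalg.norm(v)                  # unit direction vector
    return {
        # femoral mechanical axis: hip joint center (1) -> femoral intercondylar notch (4)
        "femoral_mechanical_axis": axis("hip_center", "intercondylar_notch"),
        # femoral condylar line: lateral femoral condyle (2) -> medial femoral condyle (3)
        "femoral_condylar_line": axis("lateral_condyle", "medial_condyle"),
        # tibial mechanical axis: tibial spine (11) -> ankle midpoint (15)
        "tibial_mechanical_axis": axis("tibial_spine", "ankle_midpoint"),
    }
```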
- the step of processing the skeletal feature points according to preset rules further includes: calculating the placement angle of the joint prosthesis according to the mechanical axis of the femur, the transcondylar line of the femur, and the mechanical axis of the tibia.
- the step of processing the bone feature points according to preset rules further includes: optimizing the bone feature points according to preset rules.
- the bone feature points obtained by any one of the above embodiments may not be very accurate, so it is necessary to further optimize them to improve the accuracy of the bone feature point positions.
- the bone feature points can be projected onto the corresponding physiological axis, and the projection points within a certain range on the physiological axis can be selected as the optimized bone feature points in the image to be processed.
- Figure 17 is a schematic diagram of feature point position optimization in an embodiment, taking the distal tangent points of the femur as an example. After registration is performed to obtain the corresponding bone feature point positions, according to the definition,
- the candidate bone feature points within a certain range are projected onto the femoral mechanical axis, and the point whose projection on the femoral mechanical axis is the most distal is selected as the optimized bone feature point.
- in the figure, the two distal tangent points are the optimized bone feature points,
- and their connecting line is tangent to the femur.
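- a minimal sketch of this projection-based optimization; the assumption that the positive axis direction points distally is made only for the example:

```python
import numpy as np

def most_distal_point(candidates: np.ndarray, axis_point: np.ndarray,
                      axis_dir: np.ndarray) -> np.ndarray:
    """Project candidate bone feature points (n, 3) onto the femoral mechanical
    axis and return the candidate whose projection is most distal along the
    axis (distal assumed to be the +axis_dir direction in this sketch)."""
    d = axis_dir / np.linalg.norm(axis_dir)
    t = (candidates - axis_point) @ d                 # signed projection length of each candidate
    return candidates[np.argmax(t)]
```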
- a target feature point extraction device including: a data acquisition module 100, a template query module 200, a registration module 300 and a target extraction module 400, wherein:
- the data acquisition module 100 is configured to acquire images to be processed.
- the template query module 200 is configured to obtain a template corresponding to the image to be processed, and to obtain the positional relationship between the template and the standard feature points; the template is an image generated based on a sample image, and the standard feature points are located in the template.
- Registration module 300 configured to register the template with the image to be processed.
- the target extraction module 400 is configured to determine the target feature points corresponding to the standard feature points in the image to be processed according to the positional relationship between the template and the standard feature points, combined with the positional relationship between the registered template and the image to be processed.
- the above-mentioned target feature point extraction device may also include:
- the sample acquisition module is used to acquire several sample images.
- the sample registration module is configured to select an initial template from several sample images, and respectively register the initial template and the remaining sample images to obtain a registered image.
- the statistical image calculation module is used to calculate the statistical image corresponding to the registration image.
- the similarity judging module is used to use the statistical image as the template when the similarity between the statistical image and the initial template meets the requirement; otherwise, to use the statistical image as a new initial template and return to the step of registering the initial template with the remaining sample images respectively to obtain registered images, until the similarity between the statistical image and the initial template meets the requirement.
- the above-mentioned statistical image calculation module may include:
- a position acquiring unit configured to acquire an initial position of a corresponding point in each registered image.
- the statistical image generation unit is used to calculate the average value of each initial position of each corresponding point as the target position of the corresponding point, and generate a statistical image according to the target position of the corresponding point.
- the above-mentioned target feature point extraction device may also include:
- the instruction obtaining module is used to receive the standard feature point configuration instruction for the template.
- the feature acquisition module is configured to configure corresponding standard feature points in the template according to the standard feature point configuration instruction.
- the above-mentioned target feature point extraction device may also include:
- the corresponding point distance calculation module is used to calculate the distance between the statistical image and the corresponding point in the initial template.
- the similarity calculation module is used to calculate the similarity between the statistical image and the initial template according to the distances of all corresponding points.
- the above-mentioned target feature point extraction device may also include:
- the first preprocessing module is used to preprocess the image to be processed, and includes at least one of a surface point cloud extraction unit, a point cloud downsampling unit, and a normalization unit.
- the second preprocessing module is used to preprocess the sample images, and includes at least one of a surface point cloud extraction unit, a point cloud downsampling unit, and a normalization unit.
- the surface point cloud extraction unit is used to extract vertices of all grids in the image to be processed and the sample image to obtain the surface point cloud.
- the point cloud down-sampling unit is configured to divide the image to be processed into at least one processing area, and sample the point closest to the center of the processing area in the processing area as the sampling point of the processing area. .
- the normalization unit is used to align the points in the image to be processed to the same coordinate space.
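A minimal sketch of the down-sampling and normalization steps, assuming the surface point cloud has already been extracted as an (N, 3) array; the cell size and function names are illustrative choices rather than values prescribed by the original:

```python
import numpy as np

def voxel_downsample(points, cell_size):
    """Split space into cubic processing regions of edge `cell_size` and keep,
    in each occupied region, only the point closest to the region center."""
    cells = np.floor(points / cell_size).astype(np.int64)       # region index of each point
    kept = {}
    for idx, cell in enumerate(map(tuple, cells)):
        center = (np.asarray(cell) + 0.5) * cell_size            # center of this region
        d = np.linalg.norm(points[idx] - center)
        if cell not in kept or d < kept[cell][0]:
            kept[cell] = (d, idx)                                 # remember the closest point
    return points[[idx for _, idx in kept.values()]]

def normalize(points):
    """Shift the centroid to the origin and scale the cloud to unit variance."""
    centered = points - points.mean(axis=0)
    return centered / np.sqrt(centered.var())
```

Keeping the point nearest the cell center (rather than a cell average) preserves points that actually lie on the extracted surface, at the cost of a slightly less uniform sampling.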
- the registration module 300 may further include:
- the registration function obtaining unit is used to obtain the registration function and initialize the registration function.
- the registration function optimization unit is used for inputting the image to be processed and the template into the registration function so as to optimize the parameters in the registration function.
- the registration function judging unit is used to judge that the registration between the template and the image to be processed is complete when the change in the parameters of the registration function between before and after optimization is less than a preset standard; otherwise, the image to be processed and the template continue to be input into the registration function with the optimized parameters so as to further optimize those parameters (see the sketch below).
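A minimal sketch of this convergence-controlled loop. The concrete registration function is not fixed here, so `update_parameters` stands in for one optimization step (for example, one EM iteration of a point-set registration method); that callable, the parameter layout, and the tolerance are assumptions of the sketch:

```python
import numpy as np

def register(template_pts, image_pts, update_parameters, init_params,
             tol=1e-5, max_iters=200):
    """Repeat optimization steps until the parameter change drops below `tol`."""
    params = np.asarray(init_params, dtype=float)
    for _ in range(max_iters):
        new_params = update_parameters(params, template_pts, image_pts)
        change = np.linalg.norm(new_params - params)   # variation before vs. after the step
        params = new_params
        if change < tol:                               # below the preset standard:
            break                                      # registration is considered complete
    return params
```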
- the above-mentioned target extraction module 400 also includes:
- the normal vector obtaining unit is used to obtain, when a standard feature point is on the surface of the template, the normal vector of that standard feature point in the template registered with the image to be processed.
- the distance calculation unit is used to calculate, when the normal vector intersects the registered image to be processed, the distance between the intersection point and the standard feature point.
- the first bone feature determination unit is configured to take the intersection point as the target feature point when the distance between the intersection point and the standard feature point is less than a preset distance.
- the second bone feature determination unit is used to select, when there is no intersection point or the distance between the intersection point and the standard feature point is greater than the preset distance, the point in the image to be processed closest to the standard feature point in the registered template as the target feature point (see the sketch below).
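A point-cloud approximation of the surface-landmark rule above: search along the normal of the registered standard feature point, accept the closest point on that line if it lies within the preset distance, and otherwise fall back to the nearest point of the image to be processed. This is only a sketch (a mesh-based implementation would use a true ray–triangle intersection); the ray tolerance and names are assumptions:

```python
import numpy as np

def surface_feature_point(std_pt, normal, target_pts, preset_dist, ray_tol=1.0):
    """std_pt, normal: standard feature point and its normal on the registered template.
    target_pts: (N, 3) surface points of the registered image to be processed."""
    n = normal / np.linalg.norm(normal)
    rel = target_pts - std_pt
    t = rel @ n                                        # signed offset along the normal line
    perp = np.linalg.norm(rel - np.outer(t, n), axis=1)
    on_ray = perp < ray_tol                            # points effectively on the normal line
    if on_ray.any():
        cand = target_pts[on_ray]
        best = cand[np.argmin(np.linalg.norm(cand - std_pt, axis=1))]
        if np.linalg.norm(best - std_pt) < preset_dist:
            return best                                # intersection close enough: use it
    # no intersection, or the intersection is too far: take the nearest target point
    return target_pts[np.argmin(np.linalg.norm(rel, axis=1))]
```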
- the above-mentioned target extraction module 400 also includes:
- the associated point acquisition unit is used to select, when a standard feature point is not on the surface of the template, a preset number of points from the surface of the template as associated points according to that standard feature point.
- the target point acquisition unit is used to determine the target points of the associated points in the registered image according to the registration relationship between the template and the image to be processed.
- the third bone feature determination unit is used to calculate the target feature point of the image to be processed from these target points (see the sketch below).
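For the non-surface case, the description fits a sphere to the mapped associated points and takes its center (the femoral head center in the example) as the target feature point. A linear least-squares sphere fit is one common way to do this; the sketch below assumes the associated points have already been mapped onto the registered image, and the names are illustrative:

```python
import numpy as np

def fit_sphere_center(pts):
    """Least-squares sphere fit: ||p - c||^2 = r^2 rearranges to the linear system
    2 p . c + (r^2 - ||c||^2) = ||p||^2, solved here for the center c."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]      # sphere center, e.g. the femoral head center of the target image
```

The linear formulation avoids iterative fitting and stays well conditioned as long as the associated points are not nearly coplanar.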
- the above-mentioned target feature point extraction device also includes:
- a standard bone feature acquisition module configured to acquire at least one of femoral feature points and tibial feature points.
- a bone data processing device including: a bone image acquisition module 500, a bone feature extraction module 600, and a bone feature processing module 700, wherein:
- the bone image acquiring module 500 is used to acquire the bone image to be processed.
- the bone feature extraction module 600 is configured to obtain bone feature points by processing the bone image to be processed with the target feature point extraction device of any one of the above embodiments.
- the bone feature processing module 700 is configured to process the bone feature points according to preset rules.
- the above-mentioned bone feature processing module 700 also includes:
- the axis calculation unit is used to calculate at least one of the femoral mechanical axis, the femoral condylar line and the tibial mechanical axis according to the bone feature points.
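Each of these axes reduces to a direction between two extracted landmarks (the description pairs the hip joint center with the femoral intercondylar notch for the femoral mechanical axis, and the tibial spine with the ankle midpoint for the tibial mechanical axis). A trivial sketch with illustrative argument names:

```python
import numpy as np

def axis_direction(landmark_a, landmark_b):
    """Unit direction of an anatomical axis defined by two bone feature points,
    e.g. hip joint center -> femoral intercondylar notch for the femoral mechanical axis."""
    v = np.asarray(landmark_b, dtype=float) - np.asarray(landmark_a, dtype=float)
    return v / np.linalg.norm(v)
```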
- the above-mentioned bone feature extraction module 600 also includes:
- the bone feature optimization unit is used to optimize the bone feature points according to preset rules.
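One preset rule in the description projects candidate points onto the relevant mechanical axis and, within a limited search range around the extracted landmark, keeps the surface point whose projection is most distal (the distal femoral cut points are the example given). A hedged sketch of that idea; the search radius and the sign convention for "distal" along the axis are assumptions:

```python
import numpy as np

def refine_distal_point(initial_pt, surface_pts, axis_point, axis_dir, search_radius):
    """Among surface points within `search_radius` of the initial feature point,
    return the one whose projection onto the axis is most distal (here: smallest
    projection value, assuming `axis_dir` points from distal to proximal)."""
    d = np.linalg.norm(surface_pts - initial_pt, axis=1)
    candidates = surface_pts[d < search_radius]
    if len(candidates) == 0:
        return initial_pt                            # nothing in range: keep the original point
    n = axis_dir / np.linalg.norm(axis_dir)
    proj = (candidates - axis_point) @ n             # signed position along the mechanical axis
    return candidates[np.argmin(proj)]
```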
- Each module in the above target feature point extraction device can be fully or partially realized by software, hardware and combinations thereof.
- the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
- a computer device is provided.
- the computer device may be a terminal, and its internal structure may be as shown in FIG. 20 .
- the computer device includes a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. The processor of the computer device is used to provide computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system and computer programs.
- the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
- the communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, an operator network, NFC (Near Field Communication) or other technologies.
- the computer program, when executed by the processor, implements a target feature point extraction method.
- the display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen.
- the input device of the computer device may be a touch layer covering the display screen, a button, a trackball or a touch pad provided on the casing of the computer device, or an external keyboard, touchpad or mouse.
- Figure 20 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution of this application is applied.
- the specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
- a computer device including a memory and a processor, where a computer program is stored in the memory, and the processor implements the steps in the above method embodiments when executing the computer program.
- a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
- a computer program product including a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
- Non-volatile memory can include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
- Volatile memory can include Random Access Memory (RAM) or external cache memory.
- RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The present application relates to a target feature point extraction method and apparatus, a computer device, and a storage medium. The method includes: acquiring an image to be processed; acquiring a template corresponding to the image to be processed, and acquiring a positional relationship between the template and standard feature points, the template being an image generated on the basis of sample images and the standard feature points being located in the template; registering the template with the image to be processed; and determining, according to the positional relationship between the template and the standard feature points in combination with the positional relationship between the registered template and the image to be processed, target feature points in the image to be processed that correspond to the standard feature points.
Description
相关申请
本申请要求2021年11月5日申请的,申请号为202111307112.8,名称为“目标特征点提取方法、装置、计算机设备和存储介质”的中国专利申请的优先权,在此将其全文引入作为参考。
本申请涉及图像处理技术领域,特别是涉及一种目标特征点提取方法、装置、计算机设备和存储介质。
随着图像处理技术的发展,对于CT或者MR图像中的骨骼解剖特征点的自动提取,可广泛应用于基于医学影像的辅助诊断和辅助治疗的应用场景。与此同时,人们对于医疗服务的质量要求日益提高,骨骼手术精准化与数字化已成为全球医学发展的趋势,因此通过特征点的自动提取可辅助医生进行手术规划,提高手术效率,也有利于医疗资源相对匮乏地区的患者享受更优质的手术效果。
而传统技术中,骨骼解剖特征点通常由有经验的医生手动完成,由于骨骼解剖特征点位置的选取是术前规划的关键步骤,因此在此过程中对医生的解剖学、影像学知识,以及临床经验要求较高,且手动获取特征点需要耗费医生大量的时间和精力,并且操作复杂。
发明内容
基于此,有必要针对上述技术问题,提供一种能够自动定位的目标特征点提取方法、装置、计算机设备和存储介质。
第一方面,本申请提供了一种目标特征点提取方法,包括:
获取待处理图像;
获取与所述待处理图像对应的模板,并获取所述模板与标准特征点的位置关系;所述模板为基于样本图像生成的图像,且所述标准特征点位于所述模板中;
将所述模板与所述待处理图像进行配准;
根据所述模板与所述标准特征点的位置关系,结合配准后的所述模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点。
在其中一个实施例中,在所述获取与所述待处理图像对应的模板之前,所述方法还包括生成所述模板;
其中所述生成所述模板包括:
获取若干样本图像;
从若干所述样本图像中选取初始模板,并将所述初始模板与剩余的所述样本图像分别进行配准得到配准图像;
计算所述配准图像对应的统计图像;
当所述统计图像和所述初始模板的相似度满足要求时,将所述统计图像作为模板,否则,将所述统计图像作为新的初始模板,并返回将所述初始模板与剩余的所述样本图像分别进行配准得到配准图像的步骤,直至计算的所述统计图像和所述初始模板的相似度满足要求。
在其中一个实施例中,所述计算所述配准图像对应的统计图像,包括:
获取各所述配准图像中对应点的初始位置;
计算对应点的各所述初始位置的平均值作为对应点的目标位置,并根据对应点所述目标位置生成所述统计图像。
在其中一个实施例中,在所述生成所述模板之后,所述方法还包括:
接收针对所述模板的标准特征点配置指令;
根据所述标准特征点配置指令在所述模板中配置对应的标准特征点。
在其中一个实施例中,在所述计算所述配准图像对应的统计图像之后,所述方法还包括:
计算所述统计图像与所述初始模板中对应点的距离;
根据所有对应点的距离计算得到所述统计图像与所述初始模板的相似度。
在其中一个实施例中,所述待处理图像与所述样本图样为三维网格点云图像;所述将所述初始模板与剩余的所述样本图像进行配准之前,所述方法还包括:对所述预处理图像进行预处理;和/或
在所述将所述模板与所述待处理图像进行配准之前,所述方法还包括:
对所述样本图像进行预处理;所述预处理包括提取表面点云、点云降采样、归一化中的至少一个;
所述提取表面点云为提取所述待处理图像与所述样本图像中所有网格的顶点,得到表面点云;
所述点云降采样为将所述待处理图像划分为至少一个处理区域,并将所述处理区域中到待处理中心距离最近的点采样为所述处理区域的采样点;所述归一化为将所述待处理图像中的点对齐到同一坐标空间。
在其中一个实施例中,所述将所述模板与所述待处理图像进行配准包括:
获取配准函数,并初始化所述配准函数;
将所述待处理图像和所述模板输入至所述配准函数中以对所述配准函数中参数进行优化;
当所述配准函数的参数优化后与优化前的变化量小于预设标准时,判定所述模板与所述待处理图像完成配准,否则继续将所述待处理图像和所述模板输入至参数优化后的所述配准函数中以对所述配准函数中参数进行优化。
在一个实施例中,所述根据所述模板与所述标准特征点的位置关系,结合配准后的所述模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点包括:
当所述标准特征点在所述模板表面时,获取与所述待处理图像配准后所述模板中的所述标准特征点的法向量;
当所述法向量与配准后的所述待处理图像存在交点时,则计算所述交点与所述标准特征点的距离;
当所述交点与所述标准特征点的距离小于预设距离时,则将所述交点作为目标特征点;
当不存在所述交点或者所述交点与所述标准特征点的距离大于所述预设距离,则从所述待处理图像中选取与配准后模板中的所述标准特征点最近的点作为目标特征点。
在一个实施例中,所述根据所述模板与所述标准特征点的位置关系,结合配准后的所述模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点还包括:
当所述标准特征点不在所述标准板表面时,根据所述标准特征点从所述模板表面选取预设数量的点作为关联点;
根据所述模板与所述待处理图像的配准关系,确定所述关联点在配准图像中的目标点;
根据所述目标点计算所述待处理图像的目标特征点。
在其中一个实施例中,所述待处理图像为待处理骨骼图像,所述标准特征点为骨骼特 征点,所述骨骼特征点包括股骨特征点和胫骨特征点中的至少一个。
一种骨骼数据处理方法,包括:
获取待处理骨骼图像;
根据上述任一实施例中的目标特征点提取方法对所述待处理骨骼图像进行处理得到骨骼特征点;
根据预设规则对所述骨骼特征点进行处理。
在其中一个实施例中,所述根据预设规则对所述骨骼特征点进行处理包括:
根据所述骨骼特征点计算得到股骨机械轴线、股骨通髁线以及胫骨机械轴线中的至少一个。
在其中一个实施例中,所述根据预设规则对所述骨骼特征点进行处理包括:
根据预设规则对所述骨骼特征点进行优化。
第二方面,本申请还提供了一种目标特征点提取装置,包括:
数据获取模块,用于获取待处理图像;
模板查询模块,用于获取与所述待处理图像对应的模板,并获取所述模板与目标特征点的位置关系;
配准模块,用于将所述模板与所述待处理图像进行配准;
目标提取模块,用于根据所述模板与所述目标特征点的位置关系,结合配准后所述模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点。
第三方面,本申请还提供了一种计算机设备。所述计算机设备包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现上述的方法的步骤。
第四方面,本申请还提供了一种计算机可读存储介质。所述计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述的方法的步骤。
第五方面,本申请还提供了一种计算机程序产品。所述计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现上述的方法的步骤。
上述目标特征点提取方法、装置、计算机设备和存储介质,能够将模板与待处理图像进行配准,并根据模板与标准特征点的位置关系以及结合配准后模板与待处理图像的位置关系,从待处理图像中提取目标特征点,实现目标特征点的自动提取,从而提高特征点获取效率。
为了更清楚地说明本申请实施例或传统技术中的技术方案,下面将对实施例或传统技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请一实施例中目标特征点提取方法的应用环境图。
图2为本申请一实施例中目标特征点提取方法的流程示意图。
图3为本申请一实施例中下肢骨进行分割与表面重建示意图。
图4为本申请另一实施例中模板生成方式的流程示意图。
图5为本申请一实施例中统计骨骼图像获取示意图。
图6为本申请一实施例中标准特征点配置示意图。
图7为本申请一实施例中两张骨骼图像比较相似度的示意图。
图8为本申请一实施例中点云降采样方法示意图。
图9为本申请一实施例中数据预处理示意图。
图10为本申请一实施例中骨骼点云数据归一化的示意图。
图11为本申请一实施例中非刚性配准示意图。
图12为本申请一实施例中采用Expectation-Maximization(EM)算法优化配准函数过程的示意图。
图13为本申请一实施例中表面特征点提取原理示意图。
图14为本申请另一实施例中表面特征点提取原理示意图。
图15为本申请一实施例中非表面骨骼特征点提取原理图示意图。
图16为本申请一实施例中骨骼数据处理方法的流程示意图。
图17为本申请一实施例中特征点位置优化的示意图。
图18为本申请一实施例中目标特征点提取装置的结构框图。
图19为本申请一实施例中骨骼数据处理装置的结构框图。
图20为本申请一实施例中计算机设备的内部结构图。
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供的目标特征点提取方法,可以应用于如图1所示的应用环境中。其中,终端102通过网络与医学成像设备104进行通信。其中,终端102可以接收到医学成像设备104扫描得到的按照三维矩阵存储方式的三维影像,并对该三维影像进行三维重建得到待处理图像,进而获取预先生成的与待处理图像对应的模板;将模板和待处理图像进行配准,根据模板与标准特征点的位置关系,并结合配准后的模板和待处理图像之间的位置关系,确定待处理图像中与标准特征点对应的目标特征点。由于模板中包括标准特征点,因此在配准之后,可以根据配准后的模板通过映射关系自动提取待处理图像中与标准特征点相对应的目标特征点,从而不需要人工手动从待处理图像中提取目标特征点,节省了大量的时间,提高了效率。
其中,终端102可以但不限于是各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备以及医学成像设备本身的功能模块和专用电路。医学成像设备104包括但不限于各种成像设备,例如CT成像设备(CT:Computed Tomography,它是利用精确准直的X线束与灵敏度极高的探测器一同围绕人体的某一个部位做一个接一个的断面扫描,并且通过CT扫描可以重建出肿瘤等的精确三维位置图像)、磁共振设备(其是断层成像的一种,它利用磁共振现象从人体中获得电磁信号,并重建出人体信息图像)、正电子发射型计算机断层显像(Positron Emission Computed Tomography)设备、正电子发射型磁共振成像系统(PET/MR)等。
在一个实施例中,如图2所示,提供了一种目标特征点提取方法,以该方法应用于图1中的终端为例进行说明,包括以下步骤:
S202:获取待处理图像。
具体地,在一个实施例中,待处理图像为三维表面网格数据,其可以是对医学成像设备所采集的三维影像进行三维重建得到的。在其他实施例中,当医学成像设备所采集的为三维表面网格数据时,则无需对其再进行三维重建。
其中,在医学影像领域里,三维扫描的CT或MR等医学影像数据一般是三维影像,即以三维矩阵的形式存储的医学影像数据;其中该三维影像中包括待处理目标,例如目标特征点所在的目标骨骼或器官。
具体地,三维重建具体可以包括:首先终端通过图像分割技术对三维矩阵中的待处理目标进行图像分割得到以三维矩阵形式存储的掩膜数据,然后对该掩膜数据进行三维重建得到待处理图像。其中图像分割技术包括但不限于基于深度学习全卷积网络的图像分割技术,或基于传统机器学习(比如随机森林等),或基于聚类、区域生长、活动轮廓、水平集、阈值法等分割技术。对掩膜数据进行三维重建的方法包括但不限于Marching Cube算法、 在轮廓附近根据表面阈值使用Marching Cube算法进行插值重建以及泊松表面重建算法等。具体地,图3为一个实施例中下肢骨进行分割与表面重建示意图。其中,终端使用图像分割技术对CT影像进行分割操作,得到截断面、矢状面和冠状面的骨骼数据,即将待处理图像中的骨骼像素提取出来,然后通过表面重建方法将分割得到的骨骼像素的表面通过网格数据的形式表达出来,即可得到待处理骨骼图像。该待处理骨骼图像可用于后续的配准。
S204:获取与待处理图像对应的模板,并获取模板与标准特征点的位置关系;模板为基于样本图像生成的图像,且标准特征点位于模板中。
具体地,模板是预先基于样本图像生成的,其用于表征与待处理图像对应的骨骼或器官的标准形态。该模板可以是根据术前所采集的用户的样本图像生成的,或者是根据大量的不同用户的样本图像生成的适合大量用户的模板,例如根据样本图像的平均图像来得到等,这样不需要每个用户术前都进行模板的生成。
标准特征点是在模板中所选择的骨骼或器官的特征点,其中该标准特征点可以是医生等手动在模板中进行选择的,这里需要说明的一点是特征线、特征面和特征区域都可以看做是由特征点组成的。该标准特征点是与需要从待处理图像中目标特征点相对应的。
其中模板与标准特征点的位置关系则是用于表征标准特征点在模板中的位置的数据,其中该位置关系可以是在模板所在的图像坐标系中所确定的。可选地,该模板以及标准特征点与模板的位置关系是预先生成的。在其他的实施例中,当需要增加新的标准特征点时,也可以实时在模板中标定新的标准特征点,在此不做具体的限定。
可选地,模板数据在存储的时候可以按照骨骼的类型进行存储,这样在获取到待处理图像后,可以根据待处理图像对应的骨骼类型选择已经存储的对应的模板。
S206:将模板与待处理图像进行配准。
在一个实施例中,这里的配准是指表面配准,将模板中的三维表面网格数据和待处理图像的三维表面网格数据统一到同一个坐标系下。通过配准即可以将模板中的三维表面网格数据的位置与待处理图像的三维表面网格数据的位置实现一一对应,从而为获取待处理图像中的目标特征点奠定基础。其中表面配准可以包括但不限于非刚性配准算法。
S208:根据模板与标准特征点的位置关系,结合配准后的模板与待处理图像的位置关系,确定待处理图像中与标准特征点对应的目标特征点。
其中,目标特征点是指与待处理图像配准后,模板中标准特征点映射到待处理图像上的特征点,例如将模板与待处理图像进行配准,从而模板中的网格与待处理图像中的网格一一对应,这样模板中的标准特征点则在待处理图像中也对应有目标特征点,该目标特征点就是所要提取的特征点。
需要说明的一点是,标准特征点的数量在此不做限定,在一次目标特征点提取中,终端可以并行提取多个标准特征点分别对应的目标特征点,从而可以提高目标特征点的提取的效率。
在其他的实施例中,在获取到目标特征点后,终端还可以将目标特征点输出,以便于医生等进行检查。当医生确认所提取的目标特征点时,则所提取的目标特征点正确;若是存在问题,则可以接收针对目标特征点的调整指令,根据该调整指令对目标特征点进行微调,以保证所输出的目标特征点的准确性。
在上述实施例中,能够将模板与待处理图像进行配准,并根据模板与标准特征点的位置关系以及结合配准后模板与待处理图像的位置关系,从待处理图像中提取目标特征点,实现目标特征点的自动提取,以提高效率。
在一个实施例中,如图4所示,一个实施例中的模板生成方式的流程图,该模板的生成方式可以包括:
S402:获取若干样本图像。
具体地,在一个实施例中,样本图像为三维表面网格数据,其可以是对医学成像设备所采集的三维影像进行重建得到的,具体的三维重建的方法可以参见上文所述。其中仍以 下肢骨为例进行说明,其中首先终端收集大量的不同的患者的下肢骨医学影像数据,作为训练集,然后对训练集中的医学影像数据按照上述三维重建的方法进行分割并重建得到样本图像。
S404:从若干样本图像中选取初始模板。
具体地,初始模板可以是从若干样本图像中随意选取任意一张。需要说明的是,当样本图样仅有一套时,则直接将其作为模板;若是样本图像至少存在两套时,则从样本图像中选取任意一张样本图像作为初始模板。
S406:将初始模板与剩余的样本图像分别进行配准得到配准图像。
具体地,配准图像是利用配准算法将初始模板向剩余的样本图像中配准得到的,例如利用非刚性配准算法将初始模板映射至其他剩余的样本图像中得到配准图像。
S408:计算配准图像对应的统计图像。
其中,统计图像是由配准图像根据一定规则计算得到的,例如对配准图像中点的位置求平均、求最值、求中值等等得到的。统计图像可以作为反映此次配准总体情况的代表,在一个实施例中,统计图像可用于后续与初始模板进行相似度比较,进一步得到统计图像与初始模板之间的相似度。
S410:当统计图像和初始模板的相似度满足要求时,将统计图像作为模板;否则,将统计图像作为新的初始模板,并返回将初始模板与剩余的样本图像分别进行配准得到配准图像的步骤,直至统计图像和初始模板的相似度满足要求。
具体地,相似度是体现统计图像与初始模板之间相似度的量化数值。统计图像与初始模板之间的相似度越高,表示统计图像与初始模板之间越相似;反之则表示越不相似。其中相似度可以是根据统计图像与初始模板中对应点的距离计算得到的。
具体地,统计图像和初始模板的相似度满足要求是指统计图像和初始模板之间的相似度大于等于预设阈值,此时终端认为统计图像与初始模板之间足够相似,则将统计图像作为样本图像对应的模板,例如样本图像为患者下肢骨图像,模板则为下肢骨模板。若统计图像和初始模板之间的相似度小于预设阈值,则将当前获得的统计图像作为下一次迭代的初始模板,并继续将当前初始模板与样本图像进行配准得到配准图像,并计算配准图像对应的统计图像,直至统计图像与初始模板之间的相似度大于等于预设阈值,得到最终的模板。其中,相似度的预设阈值可以根据实际情况进行调整。
在上述实施例中,通过将样本图像与初始模板不停迭代进行配准,并通过计算配准图像对应的统计图像以及比较统计图像与初始模板之间的相似度来得到模板,这样获得的模板更加真实、准确,能够为后续通过待处理图像与标准图像之间进行配准并获得目标特征点奠定良好的基础。
在一个实施例中,计算配准图像对应的统计图像的步骤包括:获取各配准图像中对应点的初始位置;计算对应点的各初始位置的平均值作为对应点的目标位置,并根据对应点的目标位置生成统计图像。
具体地,获取配准图像中对应的点的初始位置是指训练集中的样本图像与初始模板进行配准后,终端可以获取配准图像中网格数据相应的点的位置,然后对点的位置求平均值得到平均位置,并根据所有点的平均位置生成统计图像。
具体地，结合图5所示，图5为一个实施例中统计骨骼图像获取示意图，图5中的训练集存在N张待处理骨骼图像，其中，该待处理骨骼图像是由医学成像设备所采集的三维影像并进行三维重建得到的。首先，终端在训练集中任意选取一张待处理骨骼图像作为初始骨骼模板，再通过表面配准算法可以将初始骨骼模板与剩余N-1张待处理骨骼图像进行配准，得到N-1张配准骨骼图像，其中图5中以实线表示相应的初始骨骼模板上的点P(i)映射为P_1(i)…P_{N-1}(i)，i=1,2,3…，P_{N-1}(i)是指配准骨骼图像中网格数据对应点；然后对一组对应点P_1(i)…P_{N-1}(i)求其平均值P'(i)=(P_1(i)+P_2(i)+…+P_{N-1}(i))/(N-1)，并根据P'(i)生成相应的统计骨骼图像。
在上述实施例中,通过配准图像中网格数据相应的点进行计算,可以准确地得到对应的统计图像。
在一个实施例中,在生成模板之后,该目标特征点提取方法还包括:接收针对模板的标准特征点配置指令;根据标准特征点配置指令在模板中配置对应的标准特征点。
具体地,标准特征点配置指令是用于在模板上获取标准特征点的计算机指令,其可以是由用户根据应用场景输入的,例如标准特征点配置指令可以是由医生在标准骨骼模板中进行选择解剖特征点、特征面或者特征线的指令;根据配置指令在模板中配置对应的标准特征点,具体地是指终端接收标准特征点配置指令后,根据标准特征点配置指令在模板上对相应的特征点、特征面或者特征线进行标注,例如在股骨上对股骨外侧髁远端点和股骨内侧髁远端点等特征点进行标注。
具体地,结合图6所示,图6为一个实施例中标准特征点配置示意图,其标准特征点配置指令是针对标准下肢骨中的解剖特征点配置特征点;终端根据标准特征点配置指令在标准下肢骨中配置的相应的解剖特征点包括髋关节中心1、股骨外侧髁2、股骨内侧髁3、股骨髁间凹4、股骨外侧髁远端点5、股骨内侧髁远端点6、股骨外侧髁后端点7、股骨内侧髁后端点8、胫骨平台外侧9、胫骨平台内侧10、胫骨棘11、胫骨结节12、踝外侧13、踝内侧14和踝中点15中的任意一个或多个。
在上述实施例中,通过标准特征点配置指令,可在模板上得到所需的特征,这些特征点可用于后续确定待处理图像中相应的特征点。
在一个实施例中,在计算配准图像对应的统计图像之后,该目标特征点提取方法还包括:计算统计图像与初始模板中对应点的距离;根据所有对应点的距离计算得到统计图像与初始模板的相似度。
具体地，终端首先计算统计图像与初始模板中每一组网格数据相应的点之间的距离，并根据每一组网格数据相应的点之间的距离计算统计图像与初始模板的相似度。在其他实施例中，相似度可以表示为所有对应点之间平均距离的倒数。结合图7所示，图7为一个实施例中两张骨骼图像相似度的示意图，终端计算其中每一个点P'(i)与初始骨骼模板中对应的点P(i)的距离d_i，及相应的相似度，其中相似度可表示为所有对应点平均距离的倒数，即 相似度=1/[(d_1+d_2+…+d_m)/m]=m/(d_1+d_2+…+d_m)，其中m为初始骨骼模板所包含点的个数，d_m表示第m个对应点之间的距离。若统计骨骼图像与初始骨骼模板每一组网格数据相应的点之间的平均距离越小，则表示统计骨骼图像与初始骨骼模板之间的相似度越大。当相似度大于某一阈值时，则认为统计骨骼图像与标准骨骼模板足够相似，此时得到的统计骨骼图像即可作为标准模型。其中，相似度的阈值可以根据实际情况进行调整。
在上述实施例中,通过计算统计图像与初始模板之间的相似度可以准确地得到样本图像对应的标准模型。
在一个实施例中,待处理图像与样本图像为三维网格点云图像;在将初始模板与剩余的样本图像进行配准之前,该目标特征点提取方法还包括:对待处理图像进行预处理;和或在将模板与待处理图像进行配准之前,该目标特征点提取方法还包括:对样本图像进行预处理;该预处理包括提取表面点云、点云降采样、归一化中的至少一个;其中提取表面点云为提取待处理图像与样本图像中所有网格的顶点,得到表面点云;点云降采样为将待处理图像划分为至少一个处理区域,并将该处理区域中到待处理区域中心距离最近的点采样为所述处理区域的采样点;归一化为将待处理图像中的点对齐到同一坐标空间。
在本实施例中,该处理区域是指按照预设间距对整个空间进行等距划分,预设距离可以根据实际的应用场景进行划分。可选地,若预设间距为L,则可以将整个空间划分为若干个间距为L的处理区域。采样点是指按照预设规则从待处理图像中选取的点,例如采样 点可以是对待处理图像进行处理区域的划分,从处理区域中选取离处理区域中心距离最近的点获得的。
具体地,提取表面点云是指提取所有网格顶点,即可得到待处理图像和/或样本图像的表面点云。具体地,点云降采样是指在输入点云所在的空间中按照一定的间距L将整个空间划分为若干个小的立方体空间即处理区域,在每个小立方体空间中可能包括待处理图像与样本图像表面点云的点,也可能不包含。若包括表面点云的点的小立方体空间中只含有一个点时则直接保留,否则就计算每个点到小立方体空间中心点的距离,只保留距离中心最近的点作为采样点,其余点去除,最后得到一个空间分布与原点云基本相同但点数更少的稀疏点云。结合图8所示,图8为一个实施例中点云降采样方法示意图,其中实心的点为离网格中心最近的点,空心的点为其他的点,在经过点云降采样后只留下离立方体中心距离最近的点即图中实心的点。在另一个实施例中,结合图9所示,图9为一个实施例中数据预处理示意图,标准骨骼模板和患者骨骼图像在经过提取表面点云和点云降采样后即可得到稀疏点云。
具体地，归一化是指将样本图像与待处理图像转换到同一坐标下，使得后续数据处理更加便捷。例如若是所有的样本图像、待处理图像和其相应的模板不是处于同一位置视角拍摄的，即所有的样本图像、待处理图像和相应的模板不是在同一坐标空间的，则优先将所有样本图像、待处理图像和其对应的模板对齐到同一坐标空间，例如将采样点对齐到同一坐标空间。在其他实施例中，对样本图像与待处理图像的归一化处理中，首先计算所有样本图像、待处理图像和其相应的模板的质心坐标C（所有点的中心位置），再将点云平移-C，使其质心与坐标系原点重合，然后计算平移后点云坐标的方差Var，将点云中每个点的坐标除以√Var，即得到归一化的点云数据。结合图10所示，图10为一个实施例中骨骼点云数据归一化的示意图，终端首先将骨骼点云均值调整到0，再将点云方差调整到1即可得到归一化处理后的股骨点云数据。
在上述实施例中,通过对样本图像与待处理图像的预处理可以加速后续配准操作的计算速度与收敛速度。
在一个实施例中,将模板与待处理图像进行配准的步骤包括:获取配准函数,并初始化配准函数;将待处理图像和模板输入至配准函数中以对配准函数中参数进行优化;当配准函数的参数在优化后与优化前的变化量小于预设标准时,判定模板与待处理图像完成配准,否则继续将待处理图像和模板输入至参数优化后的配准函数中以对配准函数中参数进行优化。
具体地,配准函数是指实现模板与待处理图像进行配准的程序,将模板与待处理图像输入配准函数后,可得到配准后的模板。终端首先获取相应的配准函数并初始化配准函数,其中初始化配准函数包括初始化配准函数的参数;将待处理图像和模板输入至配准函数中以对配准函数中的参数进行优化以获得相应的配准图像,具体地终端根据当前配准函数的参数以及输入的模板与待处理图像,利用贝叶斯定理计算后验概率矩阵,并计算配准函数的优化方向,并按照配准函数的优化方向更新相应的参数,然后判断参数优化前与优化后的变化量是否小于预设标准,其中预设标准可以根据实际情况进行调整;如果变化量小于预设标准,则判定模板与待处理图像完成配准,输出配准后的模板和待处理图像;否则就继续将优化的参数作为配准函数的当前参数,再继续上述操作,直至模板与待处理图像完成配准。在其他实施例中,输入配准函数的待处理图像和模板是经过预处理后的数据,其中预处理包括提取表面点云、点云降采样和归一化中的至少一个,这样可以加快配准函数计算速度。
具体地，结合图11所示，图11为一个实施例中非刚性配准示意图，图11中M个圆形点y_1…y_M为标准骨骼模板上的点，N个三角形点x_1…x_N为待处理骨骼图像上的点。其中，以标准骨骼模板中的点组成的点集Y_{M×3}=(y_1,…,y_M)^T为均值建立高斯混合模型GMM，方差为σ^2，待处理骨骼图像中的点集X_{N×3}=(x_1,…,x_N)^T视为由GMM生成。GMM的概率密度为：
p(x)=∑_{m=1}^{M}(1/M)·N(x; y_m, σ^2 I)，其中 N(x; y_m, σ^2 I)=(2πσ^2)^{-3/2}·exp(−‖x−y_m‖^2/(2σ^2))。
如果考虑噪声，即外点，则添加额外的均匀分布，得到
p(x)=ω·(1/N)+(1−ω)·∑_{m=1}^{M}(1/M)·N(x; y_m, σ^2 I)，
其中，ω为外点的概率。
配准的目的为通过变换GMM的均值Y，使得X在GMM中的概率最大。假设通过参数θ变换GMM的均值Y，则待优化的配准函数为负对数似然
E(θ, σ^2)=−∑_{n=1}^{N} log p(x_n)。
具体地,结合图12所示,图12为一个实施例中采用Expectation-Maximization(EM)算法优化配准函数过程的示意图。首先将标准骨骼模板和患者骨骼进行预处理,得到归一化的点云数据;为待优化的配准函数的参数θ和σ设置一个比较合理的初始值;根据当前的参数值和输入的标准骨骼模板和患者骨骼数据,利用贝叶斯定理计算后验概率矩阵;计算本次迭代的配准函数优化方向;按照配准函数的优化方向,更新θ和σ的值;根据参数的变化量是否小于某一阈值判断迭代是否收敛;若收敛或迭代次数达到设置的最大迭代次数,则停止迭代,得到变形后的标准骨骼模板即为配准结果;否则就继续将优化的参数作为配准函数的当前参数,再继续上述操作,直至标准骨骼模板与患者骨骼完成配准。
在上述实施例中,通过配准函数可将标准骨骼模板与待处理骨骼图像进行配准,得到相应的配准后的标准骨骼模板。
在一个实施例中,根据模板与标准特征点的位置关系,结合配准后的模板与待处理图像的位置关系,确定待处理图像中与标准特征点对应的目标特征的步骤包括:当标准特征点在模板表面时,获取与待处理图像配准后模板中的标准特征点的法向量;当法向量与配准后的待处理图像存在交点时,则计算交点与标准特征点的距离;当交点与标准特征点的距离小于预设距离时,则将交点作为目标特征点;当不存在交点或者交点与标准特征点的距离大于预设距离,则从待处理图像中选取与配准后模板中的标准特征点最近的点作为目标特征点。
具体地,在模板与待处理图像进行配准之前,首先判断标准特征点是否在标准骨骼模板表面上,如果标准特征点在模板表面上,则直接进行配准并在模板与待处理图像配准之后,确定配准后模板和标准特征点的法向量,并沿该法向量作一条直线,然后判断该直线与待处理图像是否存在交点及计算在存在交点的情况下该交点与标准特征点的距离是否小于预设距离,再根据不同情况进行不同操作以提取待处理图像中的目标特征点。其中,预设距离可以根据实际应用场景进行调整。
具体地，结合图13所示，图13为一个实施例中表面特征点提取原理示意图，图13所表示的是标准骨骼模板的法向量所在直线与待处理骨骼图像存在交点，且该交点与标准骨骼特征点之间的距离小于预设距离的情况。在图13中，点P表示的是配准后标准骨骼模板中的标准骨骼特征点，以P_i及P_j这两个骨骼特征点为例，确定P_i及P_j在配准后标准骨骼模板上的法向量并沿该法向量作一条直线，并计算该直线与待处理骨骼图像的交点与标准骨骼特征点之间的距离；若待处理骨骼图像的交点与标准骨骼特征点之间的距离小于预设距离，则将该直线与待处理骨骼图像的交点即图中的三角确定为待处理骨骼图像的骨骼特征点P_i'及P_j'。
具体地，图14为另一个实施例中表面特征点提取原理示意图，图14所表示的是标准骨骼特征点与标准骨骼模板的法向量所在直线与待处理骨骼图像不存在交点的情况。以P_k这个骨骼特征点为例，P_k与标准骨骼模板的法向量所在直线与待处理骨骼图像不存在交点，则选取配准后标准骨骼特征点与待处理骨骼图像最近的点作为待处理骨骼图像的骨骼特征点，即图中三角所在位置的P_k'。
具体地,当标准骨骼特征点与标准骨骼模板的法向量所在直线与待处理骨骼图像存在交点,且交点与标准骨骼特征点大于预设距离的情况,则选取配准后标准骨骼特征点与待处理骨骼图像最近的点作为待处理骨骼图像的骨骼特征点。
在上述实施例中,通过标准图像与待处理图像配准后,可以根据不同情况进行不同的操作以准确获得待处理图像在表面的目标特征点。
在一个实施例中,根据模板与标准特征点的位置关系,结合配准后的模板与待处理图像的位置关系,确定待处理图像中与标准特征点对应的目标特征点的步骤还包括:当标准特征点不在模板表面时,根据标准特征点从模板表面选取预设数量的点作为关联点;根据模板与待处理图像的配准关系,确定关联点在配准图像中对应的目标点;根据目标点计算待处理图像的目标特征点。
具体地,当标准特征点不在模板表面时,首先需要根据标准特征点周围结构特征点选择附近表面的点作为关联点,关联点是指在模板表面并可以反应在模板内部的标准特征点的点,例如以下肢骨为例,骨骼关联点所拟合出来的球心为在骨骼内部的骨骼特征点;在确定完关联点之后,再将模板与待处理图像进行配准,并得到关联点所对应的目标点的位置,该目标点位置的确定可以参照当标准特征点在模板表面时进行处理,然后根据目标点位置计算待处理图像的目标特征点,继续以下肢骨为例,将目标点拟合成一个球形,其球形的球心即为待处理骨骼图像的骨骼特征点。
具体地，结合图15所示，图15为一个实施例中非表面骨骼特征点提取原理图示意图，左边的图为标准骨骼模板，右边的图为配准后的标准骨骼模板，标准骨骼模板通过非刚性配准得到配准后的标准骨骼模板。以股骨头中心点C为例，在标准骨骼模板上选择N个附近表面的点P_1…P_N作为关联点，P_1…P_N这N个点可以拟合出球面的球心点C。在确定关联点之后，使用非刚性配准将标准骨骼模板与待处理骨骼图像进行配准，通过配准将这N个关联点映射到待处理骨骼图像上，得到相应的目标点P_1'…P_N'，则待处理骨骼图像中的股骨头中心可通过P_1'…P_N'拟合出的球心位置获得。图15中的C'即为待处理骨骼图像中的股骨头中心。
在上述实施例中,可以通过选取标准特征点附近表面的特征点作为关联点并通过配准后的关联点获得待处理图像不在表面的标准特征点,解决了通过表面配准难以得到在模板内部特征点的难题。
在一个实施例中,待处理图像为待处理骨骼图像,标准特征点为骨骼特征点,骨骼特征点包括股骨特征点和胫骨特征点中的至少一个。
具体地,待处理骨骼图像为由医学成像设备对患者骨骼采集的三维影像进行三维重建得到的,标准特征点为预先生成的标准骨骼模板上的解剖特征点,仍以下肢骨为例,继续结合图6,其中股骨外侧髁远端点5、股骨内侧髁远端点6、股骨外侧髁后端点7和股骨内侧髁后端点8为股骨侧的特征点;胫骨平台外侧9和胫骨平台内侧10为胫骨侧的特征点且为膝关节置换时截骨量测量的参考基准点。
在一个实施例中,骨骼数据处理方法包括:获取待处理骨骼图像;根据上述任意一项实施例的目标征提取的方法对待处理骨骼图像进行处理得到骨骼特征点;根据预设规则对骨骼特征点进行处理。
具体地,结合图16所示,图16为一个实施例中骨骼数据处理方法的流程示意图,终 端首先获取待处理骨骼图像,其中待处理骨骼图像为由医学成像设备对患者骨骼采集的三维影像并进行三维重建得到的。其中,根据上述任意一项实施例的目标征提取的方法对待处理骨骼图像进行处理得到骨骼特征点的步骤包括:终端在获取患者骨骼图像之后,查询与患者骨骼图像对应的标准骨骼模板并获取标准骨骼模板与标准骨骼特征点的关系,其中标准骨骼模板可以是包括人体任意一个部位的骨骼,其生成方式可以按照上述模板的生成方式生成,标准骨骼特征点可以是由医生从标准骨骼模板上手动选取的;终端将标准骨骼模板与患者骨骼图像进行配准得到配准后标准骨骼模板即变形后的标准骨骼模板,其配准方式可以但不局限于非刚性配准算法;根据标准特征点在模板中的位置,结合变形后的标准骨骼模板以确定患者骨骼图像与标准骨骼特征点对应的骨骼特征点;最后,终端根据预设规则对获得的骨骼特征点进行处理,得到优化后的骨骼特征点,从而提高骨骼特征点的位置精度。
在上述实施例中,通过标准骨骼模板与患者骨骼图像进行配准,即可将标准骨骼模板上的标准骨骼特征点映射至患者骨骼,实现患者骨骼特征点位置的自动提取。
在一个实施例中,根据预设规则对骨骼特征点进行处理,包括:根据骨骼特征点计算得到股骨机械轴线、股骨通髁线以及胫骨机械轴线中的至少一个。
具体地,继续结合图6,根据髋关节中心1和股骨髁间凹4可以确定股骨机械轴;,根据股骨外侧髁2和股骨内侧髁3可以确定股骨通髁线;根据胫骨棘11和踝中点15可以确定胫骨机械轴线。
在上述实施例中,通过骨骼特征点可以计算骨骼特征点相应的下肢生理轴线,这些生理轴线可以进一步确定关节假体的摆放角度。
在一个实施例中,根据预设规则对骨骼特征点进行处理的步骤还包括:根据股骨机械轴线、股骨通髁线以及胫骨机械轴线计算得到关节假体的摆放角度。
在一个实施例中,根据预设规则对骨骼特征点进行处理的步骤还包括:根据预设规则对骨骼特征点进行优化。
具体地,对于一些几何意义明显的特征点,通过上述任意一个实施例得到的待处理骨骼特征点可能不是十分准确,因此有必要对得到的骨骼特征点做更进一步的优化,以提高骨骼特征点的位置精度,例如可以将骨骼特征点投影至相应的生理轴线上,并在生理轴线上选择在一定范围内的投影点作为待处理图像中优化后的骨骼特征点。
具体地,结合图17所示,图17为一个实施例中特征点位置优化的示意图,其中以股骨的远端切点为例,在进行配准得到相应的骨骼特征点位置后,按照定义将骨骼特征点投影至股骨机械轴线上并在一定范围内选择在股骨机械轴线上投影为最远端的点为优化后的骨骼特征点,此时两远端切点即优化后的骨骼特征点的连线与股骨相切。
在上述实施例中,通过对一些几何意义明显的特征点进行优化,可以在待处理图像中得到更准确的骨骼特征点位置。
应该理解的是,虽然图2、图4和图13的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且图2、图4和图13中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图18所示,提供了一种目标特征点提取装置,包括:数据获取模块100、模板查询模块200、配准模块300和目标提取模块400,其中:
数据获取模块100,用于获取待处理图像。
模板查询模块200,用于获取与待处理图像对应的模板,并获取模板与目标特征点的位置关系;其中模板为基于样本图像生成的图像,且标准特征点位于模板中。
配准模块300,用于将模板与待处理图像进行配准。
目标提取模块400,用于根据模板与目标特征点的位置关系,结合配准后的模板与待处理图像的位置关系,确定待处理图像中与标准特征点对应的目标特征点。
在其中一个实施例中,上述目标特征点提取装置还可以包括:
样本获取模板,用于获取若干样本图像。
样本配准模块,用于从若干样本图像中选取初始模板,并将初始模板与剩余的样本图像分别进行配准得到配准图像。
统计图像计算模块,用于计算配准图像对应的统计图像。
相似度判断模块,用于当统计图像和初始模板的相似度满足要求时,将统计图像作为模板;否则,将统计图像作为新的初始模板,并返回将初始模板与剩余的样本图像分别进行配准得到配准图像的步骤,直至统计图像和初始模板的相似度满足要求。
在一个实施例中,上述统计图像计算模块可以包括:
位置获取单元,用于获取各配准图像中对应点的初始位置。
统计图像生成单元,用于计算对应点各的各初始位置的平均值作为对应点的目标位置,并根据对应点的目标位置生成统计图像。
在一个实施例中,上述目标特征点提取装置还可以包括:
指令获取模块,用于接收针对模板的标准特征点配置指令。
特征获取模块,用于根据标准特征点配置指令在模板中配置对应的标准特征点。
在一个实施例中,上述目标特征点提取装置还可以包括:
对应点距离计算模块,用于计算统计图像与初始模板中对应点的距离。
相似度计算模块,用于根据所有对应点的距离计算得到统计图像与初始模板的相似度。
在一个实施例中,上述目标特征点提取装置还可以包括:
第一预处理模块,用于对待处理图像进行预处理,预处理包括表面点云提取单元、点云降采样单元、归一化单元中的至少一个。
第二预处理模块,用于对样本图像进行预处理,预处理包括表面点云提取单元、点云降采样单元、归一化单元中的至少一个。
表面点云提取单元,用于提取待处理图像与样本图像中所有网格的顶点,得到表面点云。
点云降采样单元,用于将待处理图像划分为至少一个处理区域,并将处理区域中到待处理区域中心距离最近的点采样为处理区域的采样点。。
归一化单元,用于将待处理图像中的点对齐到同一坐标空间。
在一个实施例中,上述配准模块300还可以包括:
配准函数获取单元,用于获取配准函数,并初始化配准函数。
配准函数优化单元,用于将待处理图像和模板输入至配准函数中以对配准函数中参数进行优化。
配准函数判断单元,用于当配准函数的参数在优化后与优化前的变化量小于预设标准时,判定模板与待处理图像完成配准;否则继续将待处理图像和模板输入至参数优化后的配准函数中以对配准函数中参数进行优化。
在一个实施例中,上述目标提取模块400还包括:
法向量获取单元,用于当标准特征点在模板表面时,获取与待处理图像配准后模板中的标准特征点的法向量。
距离计算单元,用于当法向量与配准后的待处理骨骼图像存在交点,则计算交点与标准特征点的距离。
第一骨骼特征确定单元,用于当交点与标准特征点的距离小于预设距离时,则将交点作为目标特征点。
第二骨骼特征确定单元,用于当不存在交点或者交点与标准特征点的距离大于预设 距离,则从待处理图像中选取与配准后模板中的标准特征点最近的点作为目标特征点。
在一个实施例中,上述目标提取模块400还包括:
关联点获取单元,用于当标准特征点不在标准板表面时,根据标准特征点从模板表面选取预设数量的点作为关联点。
目标点获取单元:用于根据模板与待处理图像的配准关系,确定关联点在配准图像中的目标点。
第三骨骼特征确定单元,用于根据目标点计算待处理图像的目标特征点。
在一个实施例中,上述目标特征点提取装置还包括:
标准骨骼特征获取模块,用于获取股骨特征点和胫骨特征点中的至少一个。
在一个实施例中,如图19所示,提供了一种骨骼数据处理装置,包括:骨骼图像获取模块500、骨骼特征提取模块600以及骨骼特征处理模块700,其中:
骨骼图像获取模块500,用于获取待处理骨骼图像。
骨骼特征提取模块600,用于根据上述任一实施例中目标特征点提取装置对待处理骨骼图像进行处理得到骨骼特征点。
骨骼特征处理模块700,根据预设规则对骨骼特征点进行处理。
在一个实施例中,上述骨骼特征处理模块700还包括:
轴线计算单元,用于根据骨骼特征点计算得到股骨机械轴线、股骨通髁线以及胫骨机械轴线中的至少一个。
在一个实施例中,上述骨骼特征提取模块600还包括:
骨骼特征优化单元,用于根据预设规则对骨骼特征点进行优化。
关于目标特征点提取装置的具体限定可以参见上文中对于目标特征点提取方法的限定,在此不再赘述。上述目标特征点提取装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是终端,其内部结构图可以如图20所示。该计算机设备包括通过系统总线连接的处理器、存储器、通信接口、显示屏和输入装置。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机程序。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的通信接口用于与外部的终端进行有线或无线方式的通信,无线方式可通过WIFI、运营商网络、NFC(近场通信)或其他技术实现。该计算机程序被处理器执行时以实现一种目标特征点提取方法。该计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,该计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图20中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,还提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读 取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。
Claims (17)
- 一种目标特征点提取方法,包括:获取待处理图像;获取与所述待处理图像对应的模板,并获取所述模板与标准特征点的位置关系;其中所述模板为基于样本图像生成的图像,且所述标准特征点位于所述模板中;将所述模板与所述待处理图像进行配准;根据所述模板与所述标准特征点的位置关系,结合配准后的所述模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点。
- 根据权利要求1所述的目标特征点提取方法,其中,在所述获取与所述待处理图像对应的模板之前,所述方法还包括生成所述模板;其中所述生成所述模板包括:获取若干样本图像;从若干所述样本图像中选取初始模板,并将所述初始模板与剩余的所述样本图像分别进行配准得到配准图像;计算所述配准图像对应的统计图像;当所述统计图像和所述初始模板的相似度满足要求时,将所述统计图像作为模板,否则,将所述统计图像作为新的初始模板,并返回将所述初始模板与剩余的所述样本图像分别进行配准得到配准图像的步骤,直至计算的所述统计图像和所述初始模板的相似度满足要求。
- 根据权利要求2所述的目标特征点提取方法,其中,所述计算所述配准图像对应的统计图像包括:获取各所述配准图像中对应点的初始位置;计算对应点的各所述初始位置的平均值作为对应点的目标位置,并根据对应点的所述目标位置生成所述统计图像。
- 根据权利要求2所述的方法,其中,在所述生成所述模板之后,所述方法还包括:接收针对所述模板的标准特征点配置指令;根据所述标准特征点配置指令在所述模板中配置对应的标准特征点。
- 根据权利要求2所述的目标特征点提取方法,其中,在所述计算所述配准图像对应的统计图像之后,还所述方法包括:计算所述统计图像与所述初始模板中对应点的距离;根据所有对应点的距离计算得到所述统计图像与所述初始模板的相似度。
- 根据权利要求2所述的目标特征点提取方法,其中,所述待处理图像与所述样本图像为三维网格点云图像;在所述将所述初始模板与剩余的所述样本图像进行配准之前,所述方法还包括:对所述待处理图像进行预处理;和/或在所述将所述模板与所述待处理图像进行配准之前,所述方法还包括:对所述样本图像进行预处理;所述预处理包括提取表面点云、点云降采样、归一化中的至少一个;其中,所述提取表面点云为提取所述待处理图像与所述样本图像中所有网格的顶点,得到表面点云;所述点云降采样为将所述待处理图像划分为至少一个处理区域,并将所述处理区域中到所述处理区域中心距离最近的点采样为所述处理区域的采样点;所述归一化为将所述待处理图像中的点对齐到同一坐标空间。
- 根据权利要求1所述的目标特征点提取方法,其中,所述将所述模板与所述待处理 图像进行配准包括:获取配准函数,并初始化所述配准函数;将所述待处理图像和所述模板输入至所述配准函数中以对所述配准函数中参数进行优化;当所述配准函数的参数在优化后与优化前的变化量小于预设标准时,判定所述模板与所述待处理图像完成配准,否则继续将所述待处理图像和所述模板输入至参数优化后的所述配准函数中以对所述配准函数中参数进行优化。
- 根据权利要求1所述的目标特征点提取方法,其中,所述根据所述模板与所述标准特征点的位置关系,结合配准后的所述模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点包括:当所述标准特征点在所述模板表面时,获取与所述待处理图像配准后所述模板中的所述标准特征点的法向量;当所述法向量与配准后的所述待处理图像存在交点时,则计算所述交点与所述标准特征点的距离;当所述交点与所述标准特征点的距离小于预设距离时,则将所述交点作为目标特征点;当不存在所述交点或者所述交点与所述标准特征点的距离大于所述预设距离,则从所述待处理图像中选取与配准后模板中的所述标准特征点最近的点作为目标特征点。
- 根据权利要求1所述的目标特征点提取方法,其中,所述根据所述模板与所述标准特征点的位置关系,结合配准后的所述模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点包括:当所述标准特征点不在所述标准板表面时,根据所述标准特征点从所述模板表面选取预设数量的点作为关联点;根据所述模板与所述待处理图像的配准关系,确定所述关联点在配准图像中的目标点;根据所述目标点计算所述待处理图像的目标特征点。
- 根据权利要求1至9中任一项所述的目标特征点提取方法,其中,所述待处理图像为待处理骨骼图像,所述标准特征点为骨骼特征点,所述骨骼特征点包括股骨特征点和胫骨特征点中的至少一个。
- 一种骨骼数据处理方法,包括:获取待处理骨骼图像;根据权利要求1至10中任一项所述的目标特征点提取方法对所述待处理骨骼图像进行处理得到骨骼特征点;根据预设规则对所述骨骼特征点进行处理。
- 根据权利要求11所述的骨骼数据处理方法,其中,所述根据预设规则对所述骨骼特征点进行处理包括:根据所述骨骼特征点计算得到股骨机械轴线、股骨通髁线以及胫骨机械轴线中的至少一个。
- 根据权利要求11所述的骨骼数据处理方法,其中,所述根据预设规则对所述骨骼特征点进行处理,还包括:根据预设规则对所述骨骼特征点进行优化。
- 一种目标特征点提取装置,包括:数据获取模块,用于获取待处理图像;模板查询模块,用于获取与所述待处理图像对应的模板,并获取所述模板与目标特征点的位置关系;其中所述模板为基于样本图像生成的图像,且所述标准特征点位于所述模板中;配准模块,用于将所述模板与所述待处理图像进行配准;目标提取模块,用于根据所述模板与所述目标特征点的位置关系,结合配准后的所述 模板与所述待处理图像的位置关系,确定所述待处理图像中与所述标准特征点对应的目标特征点。
- 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其中,所述处理器执行所述计算机程序时实现权利要求1至13中任一项所述的方法的步骤。
- 一种计算机可读存储介质,其上存储有计算机程序,其中,所述计算机程序被处理器执行时实现权利要求1至13中任一项所述的方法的步骤。
- 一种计算机程序产品,包括计算机程序,其中,该计算机程序被处理器执行时实现权利要求1至13中任一项所述的方法的步骤。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111307112.8 | 2021-11-05 | ||
CN202111307112.8A CN114155376A (zh) | 2021-11-05 | 2021-11-05 | 目标特征点提取方法、装置、计算机设备和存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023078309A1 true WO2023078309A1 (zh) | 2023-05-11 |
Family
ID=80458999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/129336 WO2023078309A1 (zh) | 2021-11-05 | 2022-11-02 | 目标特征点提取方法、装置、计算机设备和存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114155376A (zh) |
WO (1) | WO2023078309A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114155376A (zh) * | 2021-11-05 | 2022-03-08 | 苏州微创畅行机器人有限公司 | 目标特征点提取方法、装置、计算机设备和存储介质 |
CN115100258B (zh) * | 2022-08-29 | 2023-02-07 | 杭州三坛医疗科技有限公司 | 一种髋关节图像配准方法、装置、设备以及存储介质 |
CN116091643B (zh) * | 2022-12-28 | 2024-06-14 | 群滨智造科技(苏州)有限公司 | 鞋面底部工艺轨迹的生成方法、装置、设备及介质 |
CN117612676B (zh) * | 2023-11-08 | 2024-06-07 | 中国人民解放军总医院第四医学中心 | 一种实现人体解剖特征参数批量化提取的方法及装置 |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109754414A (zh) * | 2018-12-27 | 2019-05-14 | 上海商汤智能科技有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
CN110930443A (zh) * | 2019-11-27 | 2020-03-27 | 中国科学院深圳先进技术研究院 | 图像配准方法、装置及终端设备 |
CN112950684A (zh) * | 2021-03-02 | 2021-06-11 | 武汉联影智融医疗科技有限公司 | 基于表面配准的目标特征提取方法、装置、设备和介质 |
CN114155376A (zh) * | 2021-11-05 | 2022-03-08 | 苏州微创畅行机器人有限公司 | 目标特征点提取方法、装置、计算机设备和存储介质 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116452755A (zh) * | 2023-06-15 | 2023-07-18 | 成就医学科技(天津)有限公司 | 一种骨骼模型构建方法、系统、介质及设备 |
CN116452755B (zh) * | 2023-06-15 | 2023-09-22 | 成就医学科技(天津)有限公司 | 一种骨骼模型构建方法、系统、介质及设备 |
CN116468729A (zh) * | 2023-06-20 | 2023-07-21 | 南昌江铃华翔汽车零部件有限公司 | 一种汽车底盘异物检测方法、系统及计算机 |
CN116468729B (zh) * | 2023-06-20 | 2023-09-12 | 南昌江铃华翔汽车零部件有限公司 | 一种汽车底盘异物检测方法、系统及计算机 |
CN117218091A (zh) * | 2023-09-19 | 2023-12-12 | 徐州医科大学 | 面向骨折地图构建的骨折线提取方法 |
CN117218091B (zh) * | 2023-09-19 | 2024-03-29 | 徐州医科大学 | 面向骨折地图构建的骨折线提取方法 |
CN117274402A (zh) * | 2023-11-24 | 2023-12-22 | 魔视智能科技(武汉)有限公司 | 相机外参的标定方法、装置、计算机设备及存储介质 |
CN117274402B (zh) * | 2023-11-24 | 2024-04-19 | 魔视智能科技(武汉)有限公司 | 相机外参的标定方法、装置、计算机设备及存储介质 |
CN117911474A (zh) * | 2024-03-20 | 2024-04-19 | 中南大学 | 一种在线瓦片地图渐进式动态配准方法、系统及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN114155376A (zh) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023078309A1 (zh) | 目标特征点提取方法、装置、计算机设备和存储介质 | |
US10217217B2 (en) | Systems and methods for obtaining 3-D images from X-ray information | |
US20210012492A1 (en) | Systems and methods for obtaining 3-d images from x-ray information for deformed elongate bones | |
WO2022037696A1 (zh) | 基于深度学习的骨骼分割方法和系统 | |
US7394946B2 (en) | Method for automatically mapping of geometric objects in digital medical images | |
EP1598778B1 (en) | Method for automatically mapping of geometric objects in digital medical images | |
JP2020175184A (ja) | 2d解剖学的画像から3d解剖学的画像を再構成するシステムおよび方法 | |
Han et al. | A nonlinear biomechanical model based registration method for aligning prone and supine MR breast images | |
US8787648B2 (en) | CT surrogate by auto-segmentation of magnetic resonance images | |
US20210007806A1 (en) | A method for obtaining 3-d deformity correction for bones | |
CN107133946A (zh) | 医学图像处理方法、装置及设备 | |
Tang et al. | 2D/3D deformable registration using a hybrid atlas | |
Eiben et al. | Biomechanically guided prone-to-supine image registration of breast MRI using an estimated reference state | |
Mishra et al. | Adaptation and applications of a realistic digital phantom based on patient lung tumor trajectories | |
CN115131487A (zh) | 医学影像处理方法、系统、计算机设备、存储介质 | |
WO2019180746A1 (en) | A method for obtaining 3-d deformity correction for bones | |
Alam et al. | Medical image registration: Classification, applications and issues | |
WO2019180747A1 (en) | Systems and methods for obtaining patient specific instrument designs | |
JP7354280B2 (ja) | 統計的形状モデリング(ssm)を使用した解剖学的対象の発病前特性化 | |
US20240185509A1 (en) | 3d reconstruction of anatomical images | |
Price et al. | A method to calculate coverage probability from uncertainties in radiotherapy via a statistical shape model | |
CN116485850A (zh) | 基于深度学习的手术导航影像的实时非刚性配准方法及系统 | |
Robb | VR assisted surgery planning | |
Wang et al. | Using optimal transport to improve spherical harmonic quantification of complex biological shapes | |
CN116848549A (zh) | 经由降维投影的图像结构的检测 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22889331 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22889331 Country of ref document: EP Kind code of ref document: A1 |