CN114299177B - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number
CN114299177B
CN114299177B
Authority
CN
China
Prior art keywords
image
femoral
prosthesis
acetabulum
hip joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111604303.0A
Other languages
Chinese (zh)
Other versions
CN114299177A (en)
Inventor
郭楚
刘梦星
陈一平
徐晓龙
韩一鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Mindray Technology Co Ltd
Original Assignee
Wuhan Mindray Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Mindray Technology Co Ltd filed Critical Wuhan Mindray Technology Co Ltd
Priority to CN202111604303.0A priority Critical patent/CN114299177B/en
Publication of CN114299177A publication Critical patent/CN114299177A/en
Application granted granted Critical
Publication of CN114299177B publication Critical patent/CN114299177B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Prostheses (AREA)

Abstract

The invention discloses an image processing method, an image processing device, electronic equipment and a storage medium. The image processing method comprises the following steps: firstly, acquiring acetabulum characteristic information and femur characteristic information in a hip joint image to be processed through a trained hip joint recognition model; then determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information; and finally, generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis, and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis. The method obtains accurate hip joint characteristic information and an accurate prosthesis model without manual operation by a doctor, which reduces errors caused by manual operation; by generating the acetabular cup prosthesis simulation image and the femoral stem prosthesis simulation image in the hip joint image, it can also provide more accurate reference information for the doctor to formulate a better surgical plan.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of medical technology, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
In preoperative planning for hip replacement surgery, a doctor typically measures the anatomical key points of the hip on an X-ray film and then roughly estimates the prosthesis model to assist in making a surgical plan; this requires the doctor to have very rich clinical experience.
Such manual preoperative planning is prone to measurement errors, takes a long time, depends on the subjective experience of the doctor, and the manually selected prosthesis model is not necessarily the most appropriate.
Therefore, physicians currently lack reliable preoperative planning support for hip replacement surgery.
Disclosure of Invention
In order to solve the problems and disadvantages of the prior art, an object of the present invention is to provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, which can provide doctors with reliable preoperative planning support for hip replacement surgery and assist them in making an appropriate surgical plan.
To achieve the above object, the present invention provides an image processing method, including:
acquiring acetabulum characteristic information and femur characteristic information in a hip joint image to be processed through the trained hip joint identification model;
determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information;
generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis, and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis.
Optionally, the acetabulum characteristic information includes an acetabulum central point and an acetabulum size, the femur characteristic information includes a femoral shaft axis and a femoral medullary cavity contour, and the step of determining the target acetabular cup prosthesis and the target femoral stem prosthesis according to the acetabulum characteristic information and the femur characteristic information includes:
determining a target acetabular cup prosthesis from the acetabular size;
determining the femoral eccentricity according to the central point of the acetabulum and the femoral shaft axis;
and determining the target femoral stem prosthesis according to the femoral eccentricity and the femoral medullary cavity contour.
Optionally, the step of generating an acetabular cup prosthesis simulation image in the hip image based on the target acetabular cup prosthesis comprises:
determining two obturator foramina of the pelvis according to the hip joint image;
determining an obturator foramen connecting line according to the lower edges of the two pelvic obturator foramina;
and rendering the target acetabular cup prosthesis into the hip joint image based on the obturator foramen connecting line and the acetabular central point, and generating an acetabular cup prosthesis simulation image.
Optionally, the step of rendering the target acetabular cup prosthesis into the hip image based on the obturator line and the acetabular central point, and generating the acetabular cup prosthesis simulation image, comprises:
determining the inclination direction of the acetabular cup prosthesis according to the obturator foramen connecting line;
determining the position of the central point of the acetabular cup prosthesis according to the central point of the acetabulum;
and rendering the target acetabular cup prosthesis into the hip joint image based on the inclination direction and the central point position of the target acetabular cup prosthesis, and generating an acetabular cup prosthesis simulation image.
Optionally, the step of determining a target femoral stem prosthesis from the femoral eccentricity and the femoral medullary cavity contour comprises:
determining a femoral stem prosthesis to be matched according to the femoral eccentricity;
fitting the femoral stem prosthesis to be matched with the contour of the femoral medullary cavity to obtain the corresponding fitting degree of the femoral stem prosthesis to be matched;
and determining the femoral stem prosthesis to be matched with the highest fitting degree as the target femoral stem prosthesis.
Optionally, the step of generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis comprises:
determining the simulation position of the target femoral stem prosthesis according to the axis of the femoral shaft;
and rendering the target femoral stem prosthesis into the hip joint image according to the simulated position of the target femoral stem prosthesis to generate a femoral stem prosthesis simulated image.
Optionally, the hip joint recognition model includes a hip joint segmentation model and a feature point recognition model, and the step of obtaining the acetabular feature information and the femoral feature information in the hip joint image to be processed through the trained hip joint recognition model includes:
acquiring an acetabulum characteristic image and a femur characteristic image corresponding to a hip joint image to be processed through the trained hip joint segmentation model;
and acquiring an acetabulum central point and an acetabulum diameter in the acetabulum characteristic image, and a femoral shaft axis and a femoral medullary cavity contour in the femur characteristic image through the trained characteristic point identification model.
Optionally, the feature point recognition model includes an acetabulum feature point recognition model and a femur feature point recognition model, and the step of obtaining an acetabulum center point and an acetabulum diameter in the acetabulum feature image, and a femoral shaft axis and a femoral medullary cavity contour in the femur feature image through the trained feature point recognition model includes:
acquiring an acetabulum central point and an acetabulum diameter in the acetabulum characteristic image through the trained acetabulum characteristic point identification model; and acquiring a femoral shaft axis and a femoral medullary cavity outline in the femoral characteristic image through the trained femoral characteristic point identification model.
Optionally, before the step of obtaining the acetabular feature information and the femoral feature information in the hip image to be processed through the trained hip recognition model, the method further includes:
acquiring a plurality of hip joint sample images;
labeling correct bone characteristic information on the plurality of hip joint sample images;
inputting a plurality of hip joint sample images into a hip joint segmentation model to be trained to obtain corresponding acetabulum characteristic sample images and femur characteristic sample images;
and adjusting parameters in the hip joint segmentation model to be trained according to the marked bone characteristic information, the acetabulum characteristic sample image and the femur characteristic sample image.
Optionally, before the step of obtaining the acetabular feature information and the femoral feature information in the hip joint image to be processed through the trained hip joint recognition model, the method further includes:
respectively inputting the acetabulum characteristic sample image and the femur characteristic sample image into an acetabulum characteristic point identification model and a femur characteristic point identification model to be trained to obtain bone characteristic information to be verified;
and adjusting parameters in the acetabulum characteristic point identification model to be trained and the femur characteristic point identification model to be trained according to the marked bone characteristic information and the bone characteristic information to be verified.
The present invention also provides an image processing apparatus comprising:
the acquisition module is used for acquiring the acetabulum characteristic information and the femur characteristic information in the hip joint image to be processed through the trained hip joint recognition model;
the determining module is used for determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information;
the generation module is used for generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis.
The present invention also provides an electronic device, including a storage medium and a processor, where the storage medium stores a computer program, and the processor implements the steps of any one of the image processing methods when executing the computer program.
The present invention also provides a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the image processing method of any of the above.
Compared with the prior art, the invention has the following beneficial effects: firstly, acetabulum characteristic information and femur characteristic information in a hip joint image to be processed are acquired through a trained hip joint recognition model; then a target acetabular cup prosthesis and a target femoral stem prosthesis are determined according to the acetabular characteristic information and the femoral characteristic information; and finally, an acetabular cup prosthesis simulation image is generated in the hip joint image based on the target acetabular cup prosthesis, and a femoral stem prosthesis simulation image is generated in the hip joint image based on the target femoral stem prosthesis. The invention overcomes the drawback of the prior art that doctors must manually measure hip joint characteristic information, manually select the prosthesis, and rely on clinical experience to make a surgical plan. The method obtains accurate hip joint characteristic information and prosthesis models, so doctors no longer need to manually measure the hip joint characteristic information or manually select the acetabular cup prosthesis and the femoral stem prosthesis separately, which reduces errors caused by manual operation; the acetabular cup prosthesis simulation image and the femoral stem prosthesis simulation image generated in the hip joint image provide more accurate reference information when the doctor formulates a surgical plan, helping the doctor make a better plan.
Drawings
In order to illustrate the embodiments or the technical solutions in the prior art more clearly, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the invention, and it is obvious for a person skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart illustrating a first step of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a second step of the image processing method according to the embodiment of the present invention;
FIG. 3 is a schematic image processing diagram of a hip segmentation model according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a third step of the image processing method according to the embodiment of the present invention;
FIG. 5 is a flowchart illustrating a fourth step of the image processing method according to the embodiment of the present invention;
FIG. 6 is a schematic view of the inclination direction of the acetabular cup prosthesis according to an embodiment of the invention;
FIG. 7 is a schematic illustration of femoral eccentricity in an embodiment of the present invention;
FIG. 8 is an acetabular cup prosthesis simulated image and femoral stem prosthesis simulated image of an embodiment of the invention;
FIG. 9 is a flowchart of the steps of a method of image processing according to an embodiment of the present invention;
FIG. 10 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, in the embodiments of the present invention, the terms "first", "second", and the like are used only for the purpose of distinguishing between descriptions and are not intended to indicate or imply relative importance or to implicitly indicate the number of technical features indicated. In the embodiments of the present invention, the terms "for example" and "such as" are used to mean "serving as an example, instance, or illustration"; any embodiment described herein as "for example" or "such as" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes are not shown in detail to avoid obscuring the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In the prior art, when a doctor performs preoperative planning on a hip joint X-ray image, the doctor first manually marks and measures hip joint feature points; then, according to the measured feature point information and in combination with surgical experience, the doctor selects a suitable prosthesis template from a prosthesis library and manually places it and adjusts its angle and position in the image, so as to simulate the postoperative effect and assist in making a corresponding surgical plan. This approach requires many manual measurement operations, is prone to errors, depends on the personal subjective experience of the doctor, and is difficult for less experienced doctors.
The embodiment of the invention provides an image processing method which can provide reliable preoperative planning scheme support for a hip replacement operation for a doctor, assist the doctor to make a proper operation scheme and reduce errors caused by manual operation. As shown in fig. 1, the image processing method includes the steps of:
and step 100, acquiring acetabulum characteristic information and femur characteristic information in a hip joint image to be processed through the trained hip joint identification model. The hip joint image to be processed may be a hip joint X-ray image. The acetabular characteristic information includes acetabular center point and acetabular size, including acetabular radius or diameter.
In order to enable the hip joint recognition model to accurately measure information such as the acetabulum central point, the acetabulum diameter, the femoral shaft axis and the femoral medullary cavity contour, a reference object with known dimensions can be imaged together with the hip joint to obtain the hip joint image, so that accurate values of the acetabulum central point, the acetabulum diameter, the femoral shaft axis and the femoral medullary cavity contour can be calculated through calibration against the reference object.
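For illustration only, such a calibration reduces to a millimetres-per-pixel scale factor. The marker size and pixel counts below are invented for the example; the patent only requires a reference object of known dimensions.

```python
# Hypothetical calibration against a reference object of known size: its known physical
# diameter and its measured diameter in pixels give a mm-per-pixel scale, which converts
# pixel measurements (acetabulum diameter, eccentricity, ...) into millimetres.
def mm_per_pixel(marker_diameter_mm, marker_diameter_px):
    return marker_diameter_mm / marker_diameter_px

scale = mm_per_pixel(25.0, 182.0)        # e.g. a 25 mm calibration ball measured as 182 px
acetabulum_diameter_mm = 364.0 * scale   # a 364 px acetabulum diameter -> 50 mm
```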
In one embodiment, the hip joint recognition model comprises a hip joint segmentation model and a characteristic point identification model. The characteristic point identification model comprises an acetabulum characteristic point identification model and a femur characteristic point identification model, and may further comprise a pelvis characteristic point identification model. The hip joint image to be processed is input into the hip joint segmentation model, and after processing by the hip joint segmentation model, an acetabulum characteristic image, a femur characteristic image and a pelvis characteristic image corresponding to the hip joint image are obtained. The acetabulum characteristic image is then input into the acetabulum characteristic point identification model to obtain the acetabulum central point and the acetabulum diameter; the femur characteristic image is input into the femur characteristic point identification model to obtain the femoral shaft axis and the femoral medullary cavity contour; and the pelvis characteristic image is input into the pelvis characteristic point identification model to obtain pelvis characteristic information.
In an embodiment, as shown in fig. 2, step 100 may specifically include the following steps:
and step 110, acquiring an acetabulum characteristic image and a femur characteristic image corresponding to the hip joint image to be processed through the trained hip joint segmentation model. As shown in fig. 3, the hip joint image to be processed is input into the hip joint segmentation model after training, so as to obtain an acetabular feature image, a femoral feature image and a pelvic feature image.
And step 120, acquiring an acetabulum central point and an acetabulum diameter in the acetabulum characteristic image through the trained acetabulum characteristic point identification model.
And step 130, acquiring a femoral shaft axis and a femoral medullary cavity contour in the femoral characteristic image through the trained femoral characteristic point identification model.
In the above steps, the hip joint segmentation model and the feature point identification model are neural networks based on deep learning. In particular, the hip segmentation model may be a UNet network.
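As an illustration of how this two-stage pipeline can be wired together, the following Python sketch treats the trained segmentation and characteristic point identification models as black-box callables. The function name, the callables' signatures and the output data structure are assumptions made for the example, not an API defined by the patent.

```python
# A minimal, hypothetical sketch of the two-stage recognition pipeline: the segmentation
# model splits the hip joint image into bone-specific characteristic images, and the
# characteristic point identification models then extract landmarks from each of them.
from dataclasses import dataclass
import numpy as np

@dataclass
class HipFeatures:
    acetabulum_center: tuple        # (x, y) in image coordinates
    acetabulum_diameter: float      # in mm, after calibration against the reference object
    femoral_shaft_axis: tuple       # ((x0, y0), (x1, y1)): two points on the axis
    medullary_contour: np.ndarray   # N x 2 array of contour points

def extract_hip_features(hip_image, seg_model, acetabulum_model, femur_model):
    # Stage 1: segmentation into acetabulum, femur (and pelvis) characteristic images.
    acetabulum_img, femur_img, pelvis_img = seg_model(hip_image)

    # Stage 2: landmark extraction on each characteristic image.
    center, diameter = acetabulum_model(acetabulum_img)
    shaft_axis, contour = femur_model(femur_img)

    return HipFeatures(center, diameter, shaft_axis, contour), pelvis_img
```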
Step 200, determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information.
In one embodiment, step 200 specifically includes:
a target acetabular cup prosthesis is determined based on the acetabular size. For example, a suitable acetabular cup prosthesis model is selected from a library of acetabular prostheses based on acetabular size, and the acetabular cup prosthesis of that model is determined to be the target acetabular cup prosthesis. For example, if the diameter of the acetabulum is 50mm, then a size of 50mm acetabular cup prosthesis is selected in the acetabular prosthesis library.
Determining the femoral eccentricity according to the central point of the acetabulum and the axis of the femoral shaft; and determining the target femoral stem prosthesis according to the femoral eccentricity and the femoral medullary cavity contour.
Specifically, in the above steps, the femoral stem prosthesis to be matched is first determined according to the femoral eccentricity. One or more femoral stem prostheses to be matched can be selected from the available femoral stem prostheses according to the femoral eccentricity. As shown in fig. 7, the femoral eccentricity h (i.e., the femoral offset) is the perpendicular distance from the acetabulum central point O to the femoral shaft axis L3, which is also the perpendicular distance from the femoral head central point (coinciding with the acetabulum central point O) to the femoral shaft axis L3. The included angle b between the acetabular axis L4 of the human body (which coincides with the axis L2 of the acetabular cup prosthesis) and the femoral shaft axis L3 is generally the same or only slightly different between individuals, so the size of the femoral shaft, and hence the size of the femoral stem prosthesis, can be determined from the femoral eccentricity h. The femoral stem prosthesis to be matched is then fitted to the femoral medullary cavity contour to obtain the fitting degree corresponding to each femoral stem prosthesis to be matched. Finally, the femoral stem prosthesis to be matched with the highest fitting degree is determined as the target femoral stem prosthesis. In this way, a target femoral stem prosthesis that precisely matches the target acetabular cup prosthesis is obtained.
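For illustration only, the following Python sketch walks through this selection logic: the cup is chosen by diameter, the femoral eccentricity h is computed as the perpendicular distance from the acetabulum central point to the femoral shaft axis, and candidate stems are scored against the medullary-cavity contour. The library structures, the 5 mm shortlist tolerance and the IoU-style fitting degree are assumptions made for the example; the patent does not prescribe a particular fitting metric.

```python
import numpy as np

def select_acetabular_cup(acetabular_diameter_mm, cup_library):
    # cup_library: e.g. [{"model": "CUP-50", "diameter_mm": 50.0}, ...] (hypothetical structure)
    return min(cup_library, key=lambda cup: abs(cup["diameter_mm"] - acetabular_diameter_mm))

def femoral_eccentricity(acetabulum_center, shaft_axis):
    """Perpendicular distance h from the acetabulum central point O to the femoral shaft axis L3."""
    o = np.asarray(acetabulum_center, dtype=float)
    p0, p1 = (np.asarray(p, dtype=float) for p in shaft_axis)
    d, v = p1 - p0, o - p0
    return abs(d[0] * v[1] - d[1] * v[0]) / np.linalg.norm(d)   # 2D point-to-line distance

def pick_femoral_stem(eccentricity_mm, medullary_mask, stem_library, tol_mm=5.0):
    """Shortlist stems whose nominal offset is near the measured eccentricity, then keep the one
    whose template mask overlaps the medullary-cavity mask best (IoU used as the fitting degree).
    Both masks are assumed to be boolean arrays already aligned in the same image frame."""
    candidates = [s for s in stem_library
                  if abs(s["offset_mm"] - eccentricity_mm) <= tol_mm]
    def fitting_degree(stem):
        inter = np.logical_and(stem["mask"], medullary_mask).sum()
        union = np.logical_or(stem["mask"], medullary_mask).sum()
        return inter / union if union else 0.0
    return max(candidates, key=fitting_degree, default=None)
```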
Step 300, generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis, and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis.
In one embodiment, as shown in fig. 4, the step of generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis may specifically include:
two obturator pelvic holes are determined 310 from the hip images. Specifically, two closed pelvic pores can be identified in the pelvic bone feature image by the feature point identification model.
At step 320, a line connecting the obturator foramen is determined based on the inferior border of the two pelvic obturator foramen. Specifically, the lower edge points of two pelvic obturator foramen are determined, and then the two lower edge points are connected to form an obturator foramen connecting line.
Step 330, rendering the target acetabular cup prosthesis into the hip image based on the obturator foramen connecting line and the acetabular central point, and generating an acetabular cup prosthesis simulation image.
As shown in fig. 5, step 330 may further specifically include:
Step 331, determining the inclination direction of the acetabular cup prosthesis according to the obturator foramen connecting line. Specifically, as shown in fig. 6, the inclination direction may be the direction of an angle a formed between the axis L2 of the acetabular cup prosthesis and the obturator foramen connecting line L1, where the angle a may range from 38° to 42°; in this embodiment it is preferably 40°.
Step 332, determining the center point position of the target acetabular cup prosthesis according to the acetabulum center point. That is, the center point of the acetabular cup prosthesis is positioned on the acetabular center point such that the center point of the target acetabular cup prosthesis coincides with the acetabular center point.
Step 333, rendering the target acetabular cup prosthesis into the hip image based on the inclination direction and the center point position of the target acetabular cup prosthesis, and generating an acetabular cup prosthesis simulation image.
The acetabular cup prosthesis simulation image corresponding to the target acetabular cup prosthesis can be obtained through the steps.
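A geometric sketch of steps 331 to 333 is given below: the cup center is placed on the acetabulum central point and the cup axis is tilted by the angle a relative to the obturator foramen connecting line. The pose representation, the image coordinate conventions and the 40° default are assumptions made for the example.

```python
import math

def acetabular_cup_pose(obturator_line, acetabulum_center, angle_deg=40.0):
    """Return (center, axis_direction) for rendering the cup template into the hip joint image."""
    (x0, y0), (x1, y1) = obturator_line
    base_angle = math.atan2(y1 - y0, x1 - x0)           # direction of the connecting line L1
    axis_angle = base_angle + math.radians(angle_deg)   # tilt the cup axis L2 by angle a
    axis_direction = (math.cos(axis_angle), math.sin(axis_angle))
    return tuple(acetabulum_center), axis_direction
```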
In one embodiment, the step of generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis may specifically include: first determining the simulated position of the target femoral stem prosthesis according to the femoral shaft axis, and then rendering the target femoral stem prosthesis into the hip joint image according to the simulated position to generate a femoral stem prosthesis simulation image; specifically, the target femoral stem prosthesis is rendered into the hip joint image such that the axis of the femoral stem prosthesis is aligned with the femoral shaft axis.
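Continuing the same geometric style, a stem template drawn with its long axis along +y could be aligned with the femoral shaft axis as sketched below; the +y template convention is an assumption for the example.

```python
import math

def femoral_stem_rotation_deg(shaft_axis):
    """Rotation (degrees) that brings a stem template drawn along +y onto the femoral shaft axis L3."""
    (x0, y0), (x1, y1) = shaft_axis
    shaft_angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return shaft_angle - 90.0   # the template's +y axis corresponds to 90 degrees
```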
In the embodiment, the acetabulum cup prosthesis simulation image and the femoral stem prosthesis simulation image are generated in the hip joint image, so that more accurate reference information can be provided for a doctor to make a better surgical scheme. The method of the embodiment overcomes the defects that in the prior art, doctors need to manually measure hip joint characteristic information and manually select the prosthesis, and the doctors need clinical experience to make a surgical plan. By the method, accurate hip joint characteristic information and prosthesis models can be obtained, a doctor does not need to manually measure the hip joint characteristic information and manually and independently select the acetabular cup prosthesis and the femoral stem prosthesis, and errors caused by manual operation are reduced.
In this embodiment, as shown in fig. 8, the target acetabular cup prosthesis and its corresponding acetabular cup prosthesis simulation image, as well as the target femoral stem prosthesis and its corresponding femoral stem prosthesis simulation image, can be obtained by the above method. A doctor can perform preoperative planning before hip joint surgery according to the acetabular cup prosthesis simulation image and the femoral stem prosthesis simulation image, and obtain a better surgical plan through analysis, thereby improving the surgical outcome. The whole process requires neither manual measurement by the doctor nor analysis and judgment based on extensive clinical experience.
In one embodiment, the hip joint segmentation model can combine dense block and PointRend (point-based rendering) techniques on top of a UNet network to improve the model's ability to finely segment the target.
As shown in fig. 3, the UNet network of this embodiment includes 5 downsampling layers and 5 upsampling layers, and the feature maps output by the downsampling layers and the upsampling layers are fused through skip connections. Each downsampling layer contains 2 convolution layers and 1 pooling layer. The convolution layers use 3 × 3 kernels with a stride of 1 and "same" zero padding (zeros are added on all four sides of the feature map) so that the feature map size is unchanged after convolution. An activation layer using the ReLU activation function follows each convolution layer. The pooling layers use the max-pooling function with a 2 × 2 kernel. A Dropout layer with a drop rate of 0.5 is added before the pooling layer of the 5th downsampling layer; adding the Dropout layer effectively prevents the network model from overfitting and enhances robustness. Each upsampling layer mainly comprises an upsampling operation that uses a 2 × 2 kernel, so its output feature map is twice the size of its input feature map. Each upsampling layer also includes a feature fusion operation that splices and fuses the feature maps output by the corresponding downsampling layer and the upsampling layer, i.e., a skip connection. The skip connections fuse shallow and deep image feature information, which enriches the image feature information and helps improve the segmentation accuracy of the network model. Each upsampling layer further comprises convolution layers of the same form as those described above. The numbers of convolution kernels in the convolution layers of the 5 upsampling layers are 512, 256, 128, 64 and 32, respectively, and the numbers of convolution kernels in the convolution layers of the 5 downsampling layers are 32, 64, 128, 256 and 512, respectively. The number of convolution kernels for the 2 convolution operations preceding the first upsampling operation is 1024.
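A PyTorch sketch of this UNet variant follows for illustration. The single-channel input, the four output classes (background, acetabulum, femur, pelvis) and the use of transposed convolutions for the 2 × 2 upsampling are assumptions; dense blocks and PointRend refinement are omitted.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with "same" zero padding, each followed by ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class HipUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        chs = [32, 64, 128, 256, 512]                    # filters of the 5 downsampling layers
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.downs.append(double_conv(prev, c))
            prev = c
        self.dropout = nn.Dropout2d(0.5)                 # before the 5th pooling layer
        self.pool = nn.MaxPool2d(2)                      # 2x2 max pooling
        self.bottleneck = double_conv(chs[-1], 1024)     # 2 convolutions before the 1st upsampling
        self.ups = nn.ModuleList()
        self.up_convs = nn.ModuleList()
        prev = 1024
        for c in reversed(chs):                          # 512, 256, 128, 64, 32
            self.ups.append(nn.ConvTranspose2d(prev, c, kernel_size=2, stride=2))
            self.up_convs.append(double_conv(2 * c, c))  # skip concatenation doubles the channels
            prev = c
        self.head = nn.Conv2d(chs[0], n_classes, kernel_size=1)

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.downs):
            x = block(x)
            skips.append(x)                              # feature map kept for the skip connection
            if i == len(self.downs) - 1:
                x = self.dropout(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = up(x)
            x = conv(torch.cat([skip, x], dim=1))        # splice-and-fuse (skip connection)
        return self.head(x)

# Input height and width must be divisible by 32 (five 2x poolings), e.g.:
# logits = HipUNet()(torch.zeros(1, 1, 512, 512))        # -> shape (1, 4, 512, 512)
```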
The downsampling layers in the network extract local and global feature information from the input image through convolution and pooling operations, output a number of feature maps, and extract the region of interest in the image by combining this feature information. All 5 downsampling operations use the max-pooling function; max pooling extracts the global features of the image and reduces the dimensionality of the data, and its output feature map is half the size of the input feature map. The upsampling operations in the network restore the size of the feature maps, and the feature maps output by the downsampling layers and the upsampling layers are fused, which increases the diversity of the image features and improves the segmentation accuracy of the network model.
The activation function adopted by the activation layer in the network is a ReLU activation function, and each convolution operation layer in the network is connected with the ReLU activation layer. The ReLU activation layer is used for increasing the nonlinear relation between each layer in the convolutional neural network, preventing the occurrence of a network overfitting phenomenon, improving the convergence rate of the network and preventing the problem that the network cannot be trained due to gradient disappearance in back propagation.
The skip connections in the network splice and fuse the feature map output by each downsampling layer after its convolution, activation and pooling operations with the output feature map of the corresponding upsampling layer after its upsampling operation, which enriches the feature information of the image, improves the segmentation accuracy of the network and improves the generality of the model.
In one embodiment, in order to improve the processing accuracy of the hip joint recognition model in the practical application process, the hip joint recognition model needs to be trained and optimized. Therefore, as shown in fig. 9, before step 100, the following steps are further included:
step 010, a plurality of hip joint sample images are acquired.
Step 020, labeling correct bone characteristic information on the plurality of hip joint sample images. The correct bone characteristic information can be marked on each hip joint sample image manually. The bone characteristic information includes characteristic information of bones such as the acetabulum, the femur and the pelvis.
Step 030, inputting the plurality of hip joint sample images into a hip joint segmentation model to be trained to obtain corresponding acetabulum characteristic sample images and femur characteristic sample images; a pelvis characteristic sample image can be obtained at the same time.
Step 040, adjusting parameters in the hip joint segmentation model to be trained according to the labeled bone characteristic information, the acetabulum characteristic sample images and the femur characteristic sample images. Specifically, the acetabulum characteristic sample images, the femur characteristic sample images and the pelvis characteristic sample images are used to adjust the parameters of the hip joint segmentation model to be trained, so that the difference between the bone characteristic information obtained from the hip joint segmentation model and the labeled bone characteristic information becomes smaller and smaller until it is smaller than a preset value, at which point training of the hip joint segmentation model is finished.
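For illustration, a minimal training loop consistent with this description might look as follows. The optimizer, loss function, learning rate and threshold are assumptions; the only requirement in the text is that training stops once the difference drops below a preset value.

```python
import torch
import torch.nn as nn

def train_hip_segmentation(model, loader, epochs=100, loss_threshold=0.05, lr=1e-4, device="cpu"):
    """Adjust the segmentation model's parameters until the gap between its predictions and the
    labeled bone characteristic information (here: per-pixel class labels) is below a preset value."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        running = 0.0
        for images, labels in loader:               # labels: (N, H, W) tensors of bone class indices
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running += loss.item() * images.size(0)
        epoch_loss = running / len(loader.dataset)
        if epoch_loss < loss_threshold:             # difference below the preset value: stop training
            break
    return model
```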
In one embodiment, before step 100, the method further comprises the following steps:
firstly, inputting the acetabulum characteristic sample image and the femur characteristic sample image into the acetabulum characteristic point identification model to be trained and the femur characteristic point identification model respectively to obtain the bone characteristic information to be verified.
And then, adjusting parameters in the acetabulum characteristic point identification model to be trained and the femur characteristic point identification model according to the marked bone characteristic information and the bone characteristic information to be verified. Specifically, parameters of the acetabulum feature point identification model to be trained and parameters of the femur feature point identification model are adjusted, so that differences between the bone feature information obtained from the acetabulum feature point identification model and the femur feature point identification model and the marked bone feature information are smaller and smaller until the differences are smaller than a preset value, and at this time, the training of the acetabulum feature point identification model and the femur feature point identification model is completed.
Through the steps, the hip joint segmentation model and the feature point recognition model of the embodiment can be well trained and optimized, and the processing precision of the hip joint segmentation model and the feature point recognition model in application is improved.
According to the embodiment of the invention, accurate hip joint characteristic information such as the acetabulum central point, the acetabulum diameter, the femoral shaft axis and the femoral medullary cavity contour can be obtained through the hip joint recognition model; a target acetabular cup prosthesis and a target femoral stem prosthesis are selected according to the hip joint characteristic information, and an acetabular cup prosthesis simulation image and a femoral stem prosthesis simulation image are then generated. The doctor does not need to manually measure the hip joint characteristic information or manually select the acetabular cup prosthesis and the femoral stem prosthesis separately, which reduces errors caused by manual operation, and the doctor can make a better surgical plan by directly referring to the acetabular cup prosthesis simulation image and the femoral stem prosthesis simulation image.
An embodiment of the present invention provides an image processing apparatus, as shown in fig. 10, including an obtaining module 1, a determining module 2, and a generating module 3. Wherein:
the acquisition module 1 is used for acquiring acetabulum characteristic information and femur characteristic information in a hip joint image to be processed through a trained hip joint recognition model;
the determining module 2 is used for determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information;
the generation module 3 is used for generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis.
The image processing device of the embodiment of the invention adopts the image processing method provided by the embodiment, firstly, the acquisition module 1 acquires the acetabulum characteristic information and the femur characteristic information in the hip joint image to be processed through the trained hip joint recognition model; then the determining module 2 determines a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information; and finally, the generation module 3 generates an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis and generates a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis. The embodiment overcomes the defects that in the prior art, a doctor needs to manually measure hip joint characteristic information and manually select a prosthesis, and needs clinical experience of the doctor to make an operation scheme. The method of the embodiment can obtain accurate hip joint characteristic information and prosthesis model, a doctor does not need to manually measure the hip joint characteristic information and manually and independently select the acetabular cup prosthesis and the femoral stem prosthesis, errors caused by manual operation are reduced, the acetabular cup prosthesis simulation image and the femoral stem prosthesis simulation image are generated in the hip joint image, and more accurate reference information can be provided for the doctor to make a better surgical scheme.
An embodiment of the present invention also provides an electronic device, as shown in fig. 11, including a processor and a computer-readable storage medium, where the storage medium stores computer instructions or a computer program.
The electronic equipment of the embodiment of the invention can realize the following method:
firstly, acquiring acetabulum characteristic information and femur characteristic information in a hip joint image to be processed through a trained hip joint recognition model; then determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information; and finally, generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis, and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis.
The electronic equipment of the embodiment overcomes the defects that in the prior art, doctors need to manually measure hip joint characteristic information, manually select the prosthesis and need clinical experience of the doctors to make a surgical plan. The method of the embodiment can obtain accurate hip joint characteristic information and prosthesis model, a doctor does not need to manually measure the hip joint characteristic information and manually and independently select the acetabular cup prosthesis and the femoral stem prosthesis, errors caused by manual operation are reduced, the acetabular cup prosthesis simulation image and the femoral stem prosthesis simulation image are generated in the hip joint image, and more accurate reference information can be provided for the doctor to make a better surgical scheme.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by instructions (computer programs) which may be stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by the instructions (computer programs). To this end, the storage medium of the hardware device according to the embodiment of the present invention stores a plurality of instructions, and the instructions can be loaded by the processor to execute the steps of any embodiment of the image processing method according to the embodiment of the present invention.
The storage medium and the processor are electrically connected, directly or indirectly, to enable transmission or interaction of data. For example, these elements may be electrically connected to each other via one or more communication buses or signal lines. The storage medium stores computer-executable instructions for implementing the image processing method, including at least one software functional module that can be stored in the storage medium in the form of software or firmware, and the processor executes various functional applications and data processing by running the software programs and modules stored in the storage medium. The storage medium may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like. The storage medium is used for storing programs, and the processor executes the programs after receiving execution instructions. Further, the software programs and modules within the storage medium may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.) and may communicate with various hardware or software components to provide an operating environment for other software components. The processor may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like. The various methods, steps, and logic flow diagrams disclosed in this embodiment may be implemented or performed by the processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Since the instructions (computer programs) stored in the storage medium can execute the steps in any image processing method provided by the embodiment of the present invention, the advantageous effects that can be achieved by any image processing method provided by the embodiment of the present invention can be achieved. The instructions or computer programs in the storage medium of the embodiments of the present invention may implement the following steps:
firstly, acquiring acetabulum characteristic information and femur characteristic information in a hip joint image to be processed through a trained hip joint recognition model; then determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular characteristic information and the femoral characteristic information; and finally, generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis, and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis.
The storage medium of this embodiment overcomes the defects that, in the prior art, doctors need to manually measure hip joint characteristic information, manually select the prosthesis model, and rely on their clinical experience to make a surgical plan. The method of the embodiment can obtain accurate hip joint characteristic information and prosthesis models; a doctor does not need to manually measure the hip joint characteristic information or manually select the acetabular cup prosthesis and the femoral stem prosthesis separately, which reduces errors caused by manual operation, and the acetabular cup prosthesis simulation image and the femoral stem prosthesis simulation image generated in the hip joint image can provide more accurate reference information for the doctor to make a better surgical plan.
While the invention has been described with reference to specific preferred embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. An image processing method, comprising:
acquiring acetabulum characteristic information and femur characteristic information in a hip joint image to be processed through the trained hip joint identification model; the acetabulum characteristic information comprises an acetabulum central point and acetabulum size, and the femur characteristic information comprises a femur shaft axis and a femur medullary cavity outline;
determining a target acetabular cup prosthesis and a target femoral stem prosthesis according to the acetabular feature information and the femoral feature information;
generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis, specifically comprising: determining two obturator foramina of the pelvis according to the hip joint image, determining an obturator foramen connecting line according to the lower edges of the two pelvic obturator foramina, rendering the target acetabular cup prosthesis into the hip joint image based on the obturator foramen connecting line and the acetabulum central point, and generating an acetabular cup prosthesis simulation image; and generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis, specifically comprising: determining the simulation position of the target femoral stem prosthesis according to the femoral shaft axis, and rendering the target femoral stem prosthesis into the hip joint image according to the simulation position of the target femoral stem prosthesis to generate a femoral stem prosthesis simulation image.
2. The image processing method of claim 1, wherein the step of determining a target acetabular cup prosthesis and a target femoral stem prosthesis from the acetabular feature information and the femoral feature information comprises:
determining a target acetabular cup prosthesis from the acetabular size;
determining femoral eccentricity according to the acetabulum central point and the femoral shaft axis;
and determining a target femoral stem prosthesis according to the femoral eccentricity and the femoral medullary cavity contour.
3. The method of claim 1, wherein the step of rendering the target acetabular cup prosthesis into the hip image based on the obturator foramen line and the acetabular center point, generating an acetabular cup prosthesis simulation image, comprises:
determining a tilt direction of the acetabular cup prosthesis from the obturator foramen line;
determining a center point position of the acetabular cup prosthesis from the acetabular center point;
rendering the target acetabular cup prosthesis into the hip image based on the tilt direction and the center point position of the target acetabular cup prosthesis, generating an acetabular cup prosthesis simulation image.
4. The image processing method according to claim 2, wherein the step of determining a target femoral stem prosthesis based on the femoral eccentricity and the femoral medullary cavity contour comprises:
determining a femoral stem prosthesis to be matched according to the femoral eccentricity;
fitting the femoral stem prosthesis to be matched with the contour of the femoral medullary cavity to obtain the corresponding fitting degree of the femoral stem prosthesis to be matched;
and determining the femoral stem prosthesis to be matched with the highest fitting degree as the target femoral stem prosthesis.
5. The image processing method according to any one of claims 1 to 4, wherein the hip recognition model includes a hip segmentation model and a feature point recognition model, and the step of obtaining the acetabular feature information and the femoral feature information in the hip image to be processed through the trained hip recognition model includes:
acquiring an acetabulum characteristic image and a femur characteristic image corresponding to a hip joint image to be processed through the trained hip joint segmentation model;
and acquiring an acetabulum central point and acetabulum size in the acetabulum characteristic image, and a femoral shaft axis and a femoral medullary cavity contour in the femoral characteristic image through the trained characteristic point identification model.
6. The image processing method according to claim 5, wherein the feature point recognition model includes an acetabulum feature point recognition model and a femur feature point recognition model, and the step of obtaining the acetabulum center point and the acetabulum size in the acetabulum feature image and the femoral shaft axis and the femoral medullary cavity contour in the femur feature image through the trained feature point recognition model comprises:
acquiring an acetabulum central point and acetabulum size in the acetabulum characteristic image through the trained acetabulum characteristic point identification model; and acquiring the femoral shaft axis and the contour of the femoral medullary cavity in the femoral characteristic image through the trained femoral characteristic point identification model.
7. The image processing method according to claim 6, further comprising, before the step of obtaining the acetabulum feature information and the femur feature information in the hip joint image to be processed by the trained hip joint recognition model:
acquiring a plurality of hip joint sample images;
labeling correct bone feature information on the plurality of hip joint sample images;
inputting the plurality of hip joint sample images into a hip joint segmentation model to be trained to obtain corresponding acetabulum characteristic sample images and femur characteristic sample images;
and adjusting parameters in the hip joint segmentation model to be trained according to the marked bone characteristic information, the acetabulum characteristic sample image and the femur characteristic sample image.
8. The image processing method according to claim 7, further comprising, before the step of obtaining acetabular feature information and femoral feature information in the hip image to be processed by the trained hip recognition model, the steps of:
inputting the acetabulum characteristic sample image and the femur characteristic sample image into the acetabulum characteristic point identification model to be trained and the femur characteristic point identification model respectively to obtain bone characteristic information to be verified;
and adjusting parameters in the acetabulum characteristic point identification model to be trained and the femur characteristic point identification model according to the marked bone characteristic information and the bone characteristic information to be verified.
9. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring the acetabulum characteristic information and the femur characteristic information in the hip joint image to be processed through the trained hip joint recognition model; the acetabulum characteristic information comprises an acetabulum central point and acetabulum size, and the femur characteristic information comprises a femur shaft axis and a femur medullary cavity outline;
a determination module for determining a target acetabular cup prosthesis and a target femoral stem prosthesis from the acetabular characteristic information and the femoral characteristic information;
a generating module for generating an acetabular cup prosthesis simulation image in the hip joint image based on the target acetabular cup prosthesis, specifically for: determining two obturator foramina of the pelvis according to the hip joint image, determining an obturator foramen connecting line according to the lower edges of the two pelvic obturator foramina, rendering the target acetabular cup prosthesis into the hip joint image based on the obturator foramen connecting line and the acetabulum central point, and generating an acetabular cup prosthesis simulation image; and for generating a femoral stem prosthesis simulation image in the hip joint image based on the target femoral stem prosthesis, specifically for: determining the simulation position of the target femoral stem prosthesis according to the femoral shaft axis, and rendering the target femoral stem prosthesis into the hip joint image according to the simulation position of the target femoral stem prosthesis to generate a femoral stem prosthesis simulation image.
10. An electronic device comprising a storage medium and a processor, the storage medium storing a computer program, wherein the processor implements the steps of the image processing method according to any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 8.
CN202111604303.0A 2021-12-24 2021-12-24 Image processing method, image processing device, electronic equipment and storage medium Active CN114299177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111604303.0A CN114299177B (en) 2021-12-24 2021-12-24 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111604303.0A CN114299177B (en) 2021-12-24 2021-12-24 Image processing method, image processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114299177A CN114299177A (en) 2022-04-08
CN114299177B (en) 2022-09-09

Family

ID=80968612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111604303.0A Active CN114299177B (en) 2021-12-24 2021-12-24 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114299177B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118121299A (en) * 2024-03-26 2024-06-04 北京和华瑞博医疗科技有限公司 Data processing method, apparatus, device, medium, and program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179350A (en) * 2020-02-13 2020-05-19 张逸凌 Hip joint image processing method based on deep learning and computing equipment
CN113724319A (en) * 2021-08-31 2021-11-30 瓴域影诺(北京)科技有限公司 Artificial hip joint prosthesis model matching method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3062297B1 (en) * 2017-02-01 2022-07-15 Laurent Cazal METHOD AND DEVICE FOR ASSISTING THE PLACEMENT OF A PROSTHESIS, PARTICULARLY OF THE HIP, BY A SURGEON FOLLOWING DIFFERENT SURGICAL PROTOCOLS
CN111223146B (en) * 2020-02-13 2021-05-04 张逸凌 Processing method and computing device for hip joint image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179350A (en) * 2020-02-13 2020-05-19 张逸凌 Hip joint image processing method based on deep learning and computing equipment
CN113724319A (en) * 2021-08-31 2021-11-30 瓴域影诺(北京)科技有限公司 Artificial hip joint prosthesis model matching method and system

Also Published As

Publication number Publication date
CN114299177A (en) 2022-04-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant