CN113920128B - Knee joint femur tibia segmentation method and device - Google Patents


Info

Publication number
CN113920128B
CN113920128B CN202111023781.2A
Authority
CN
China
Prior art keywords
image data
segmentation
bone
knee joint
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111023781.2A
Other languages
Chinese (zh)
Other versions
CN113920128A (en)
Inventor
Zhang Yiling (张逸凌)
Liu Xingyu (刘星宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhang Yiling
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd filed Critical Longwood Valley Medtech Co Ltd
Priority to CN202111023781.2A
Publication of CN113920128A
Application granted
Publication of CN113920128B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Abstract

The application discloses a knee joint femur and tibia segmentation method and device. The method comprises the following steps: the acquired medical image data to be processed is segmented to obtain a three-dimensional medical image of the knee joint; an erosion operation is performed on the three-dimensional medical image until the femur and the tibia of the knee joint separate; the eroded image is then repaired, and the bone removed during erosion is filled back into the corresponding femur or tibia, yielding a complete femur and tibia segmentation result. By processing the medical image data with computer vision techniques, the method solves the problems of unstable segmentation effect and slow segmentation speed of prior-art knee joint segmentation methods, and achieves the technical effects of a stable segmentation effect and a fast segmentation speed.

Description

Knee joint femur tibia segmentation method and device
Technical Field
The application relates to the field of computer vision, in particular to a knee joint femur tibia segmentation method and device.
Background
The femur and the tibia are the two most important bone blocks of the knee joint, and in knee surgery the joint prosthesis is mainly placed on the femur and the tibia, so accurate and rapid segmentation of the femur and tibia is a prerequisite for accurate surgical planning. In the prior art, the femur and tibia are segmented by deep learning methods, but the training process of such methods is complex, the running speed is slow, and the results are unstable on some complex cases.
Therefore, conventional knee joint segmentation methods suffer from an unstable segmentation effect and low segmentation speed.
Disclosure of Invention
The main purpose of the present application is to provide a knee joint femur and tibia segmentation method and device, which solve the problems of unstable segmentation effect and slow segmentation speed of prior-art knee joint segmentation methods.
In order to achieve the above object, the present application proposes a data processing method for knee joint femur tibia segmentation.
In view of the above, according to a first aspect of the present application, a knee joint femur tibia segmentation method is provided, including:
acquiring medical image data to be processed;
performing image segmentation on the medical image data to obtain two-dimensional medical image data of the knee joint in the medical image to be processed;
constructing three-dimensional medical image data of the knee joint based on the two-dimensional medical image data of the knee joint;
and performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain pre-segmentation result image data, wherein the pre-segmentation result image data comprises independent pre-segmentation femur image data and independent pre-segmentation tibia image data.
Further, performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain pre-segmentation result image data includes:
performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain erosion process image data;
if the erosion process image data meets a preset pre-segmentation condition, obtaining the pre-segmentation result image data, wherein the pre-segmentation result image data comprises independent pre-segmentation femur image data and independent pre-segmentation tibia image data;
and if the erosion process image data does not meet the preset pre-segmentation condition, performing an erosion iteration operation on the erosion process image data until the erosion process image data meets the preset pre-segmentation condition.
Further, if the erosion process image data meets the preset pre-segmentation condition, obtaining the pre-segmentation result image data includes:
identifying the erosion process image data to obtain tibia erosion process image data and femur erosion process image data;
extracting the largest connected region of the tibia erosion process image data and the femur erosion process image data;
if the largest connected region meets the preset pre-segmentation condition, the erosion process image data meets the preset pre-segmentation condition;
and if the largest connected region does not meet the preset pre-segmentation condition, the erosion process image data does not meet the preset pre-segmentation condition.
Further, if the largest connected region meets the preset pre-segmentation condition, determining that the erosion process image meets the preset pre-segmentation condition includes:
identifying the largest connected region to obtain the highest point and the lowest point of the largest connected region, wherein these are the highest point and the lowest point in a preset direction;
identifying the erosion process image to obtain a bone highest point and a bone lowest point, wherein the bone highest point is the highest point of the femur and the tibia in the erosion process image, and the bone lowest point is the lowest point of the femur and the tibia in the erosion process image;
if the highest point of the largest connected region is not the bone highest point, or the lowest point of the largest connected region is not the bone lowest point, the largest connected region meets the preset pre-segmentation condition;
and if the highest point of the largest connected region is the bone highest point and the lowest point of the largest connected region is the bone lowest point, the largest connected region does not meet the preset pre-segmentation condition.
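The endpoint check above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the function name `separated`, the use of scipy, and the choice of axis 0 as the preset direction are all assumptions.

```python
import numpy as np
from scipy.ndimage import label

def separated(volume):
    """Pre-segmentation check on a binary bone volume.

    The bones count as separated when the largest connected component
    no longer spans both the highest and lowest bone voxels along
    axis 0 (the assumed superior-inferior direction).
    """
    labels, n = label(volume)
    if n < 2:
        return False  # everything is still one connected piece
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                      # label 0 is background
    largest = sizes.argmax()          # largest connected component
    bone_z = np.argwhere(volume > 0)[:, 0]
    comp_z = np.argwhere(labels == largest)[:, 0]
    spans_both = comp_z.min() == bone_z.min() and comp_z.max() == bone_z.max()
    return not spans_both

# Two disjoint blocks along axis 0: the largest component cannot
# touch both the top-most and bottom-most bone voxels.
vol = np.zeros((6, 3, 3), dtype=int)
vol[0:2] = 1   # stand-in "femur"
vol[4:6] = 1   # stand-in "tibia"
```

With a single solid block, `separated` returns `False`, which corresponds to the "does not meet the preset pre-segmentation condition" branch.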
Further, after the erosion operation is performed on the three-dimensional medical image of the knee joint to obtain the pre-segmentation result image, the method further comprises:
performing a bone repair operation on the pre-segmentation result image data to obtain target segmentation image data, which includes:
comparing the pre-segmentation result image data with the three-dimensional medical image data of the knee joint to obtain lost bone data, wherein the lost bone data is the data of bone removed by erosion;
performing expansion processing on the lost bone data to obtain expanded bone data;
calculating an overlapping area of the expanded bone data and the pre-segmentation result image data;
and calculating the overlapping area based on a bone distribution rule, and performing bone filling processing on the pre-segmentation result image data according to a calculation result to obtain the target segmentation image data.
Further, based on a bone distribution rule, calculating the overlapping area, and performing bone filling processing on the pre-segmentation result image data according to a calculation result to obtain target segmentation image data, including:
identifying the pre-segmentation result image data to obtain pre-segmentation femur image data and pre-segmentation tibia image data;
calculating the overlapping area of the expanded bone data and the pre-segmentation femur image data to obtain a first overlapping area;
calculating the overlapping area of the expanded bone data and the pre-segmentation tibia image data to obtain a second overlapping area;
comparing the first overlapping area to the second overlapping area;
if the first overlapping area is larger than the second overlapping area, filling the lost bone data into the pre-segmentation femur image data to obtain the target segmentation image data;
and if the second overlapping area is larger than the first overlapping area, filling the lost bone data into the pre-segmentation tibia image data to obtain the target segmentation image data.
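The overlap-and-fill rule above can be sketched as follows. This is a hedged illustration under the assumption that masks are boolean arrays and that morphological dilation implements the "expansion processing"; the function name `assign_lost_bone` and the toy masks are hypothetical.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def assign_lost_bone(lost, femur, tibia):
    """Attach an eroded-away bone fragment to the femur or the tibia.

    The fragment is dilated, its overlap with each pre-segmented bone
    is measured, and it is filled into whichever bone it overlaps more
    (the bone-distribution rule described above).
    """
    dilated = binary_dilation(lost)
    overlap_femur = np.logical_and(dilated, femur).sum()
    overlap_tibia = np.logical_and(dilated, tibia).sum()
    if overlap_femur > overlap_tibia:
        return np.logical_or(femur, lost), tibia
    return femur, np.logical_or(tibia, lost)

# A fragment adjacent to the femur mask gets filled into the femur.
femur = np.zeros((8, 8), dtype=bool); femur[0:3, :] = True
tibia = np.zeros((8, 8), dtype=bool); tibia[6:8, :] = True
lost  = np.zeros((8, 8), dtype=bool); lost[3, 2:5] = True
new_femur, new_tibia = assign_lost_bone(lost, femur, tibia)
```

The tibia mask is returned unchanged because the dilated fragment only touches femur pixels.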
Further, performing image segmentation on the medical image data to be processed to obtain two-dimensional medical image data of the knee joint in the medical image to be processed includes:
performing image segmentation processing on the medical image data to be processed based on a preset image segmentation model to obtain two-dimensional medical image data of a knee joint in the medical image to be processed;
the preset image segmentation model is obtained based on image data set training, the image data set comprises marked positive sample image data and unmarked negative sample image data, and the positive sample image data comprises marks used for representing a target area.
According to a second aspect of the present application, there is provided a knee joint femoral tibial segmentation device comprising:
the acquisition module is used for acquiring medical image data to be processed;
the image segmentation module is used for carrying out image segmentation on the medical image data to obtain two-dimensional medical image data of the knee joint in the medical image to be processed;
the construction module is used for constructing three-dimensional medical image data of the knee joint based on the two-dimensional medical image data of the knee joint;
and the erosion module is used for performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain pre-segmentation result image data, wherein the pre-segmentation result image data comprises independent pre-segmentation femur image data and independent pre-segmentation tibia image data.
According to a third aspect of the present application, a computer-readable storage medium is provided, which stores computer instructions for causing a computer to execute the above-mentioned knee joint femur tibia segmentation method.
According to a fourth aspect of the present application, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the above-described knee femoral tibial segmentation method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the method, the three-dimensional medical image of the knee joint is obtained after the obtained medical image data to be processed is segmented, the three-dimensional medical image of the knee joint is obtained, the three-dimensional medical image of the knee joint, which is formed by dividing the femur and the tibia of the knee joint, is subjected to corrosion processing, the three-dimensional medical image of the knee joint is obtained after the knee joint is subjected to corrosion processing, the pre-segmentation result image data obtained after the corrosion processing is subjected to repair processing, bone corroded in the corrosion process is filled in the corresponding femur or tibia part, the complete segmentation result of the femur and the tibia is obtained, the medical image data are processed through computer vision, the problems that in the prior art, the knee joint segmentation method is unstable in segmentation effect and slow in segmentation speed are solved, and the technical effects that the segmentation effect is stable and the segmentation speed is high are achieved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, are included to provide a further understanding of the application and to make other features, objects, and advantages of the application more apparent. The drawings and their description illustrate embodiments of the application and do not limit it. In the drawings:
fig. 1 is a schematic flowchart of a knee joint femur tibia segmentation method provided in the present application;
fig. 2 is a schematic flowchart of a knee joint femur tibia segmentation method provided in the present application;
FIG. 3 shows example images of a non-separated femur and tibia and of a separated femur and tibia according to the present application;
fig. 4 is a schematic flowchart of a knee joint femur tibia segmentation method provided in the present application;
fig. 5 is a schematic structural diagram of a knee joint femur tibia segmentation apparatus provided in the present application;
fig. 6 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used in other meanings besides orientation or positional relationship, for example, the term "upper" may also be used in some cases to indicate a certain attaching or connecting relationship. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, "connected" may be a fixed connection, a detachable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The knee joint is formed by the lower end of the femur, the upper end of the tibia, and the patella, and is the largest and most complex joint of the human body. The femur and the tibia are the two most important bone blocks of the knee joint, and the joint prosthesis is mainly placed on the femur and the tibia in knee surgery; separating the femur and tibia of the knee joint quickly and accurately is therefore a prerequisite for accurate surgical planning.
Fig. 1 is a schematic flowchart of a knee joint femur tibia segmentation method provided in the present application, and as shown in fig. 1, the method includes the following steps:
S101: acquiring medical image data to be processed;
the medical image data to be processed may be acquired by a professional medical image acquisition device, such as an X-ray projection device, a CT projection device, or the like.
S102: performing image segmentation on the medical image data to obtain two-dimensional medical image data of the knee joint in the medical image to be processed;
Image segmentation processing is performed on the medical image data to be processed based on a preset image segmentation model to obtain two-dimensional medical image data of the knee joint in the medical image to be processed; this two-dimensional data is an image segmentation result containing the knee joint region. Segmenting the medical image data allows the bone tissue containing the knee joint to be identified quickly and accurately. Image segmentation separates the bone structure from the background soft tissue in the medical image data to be processed, giving knee joint bone image data.
The preset image segmentation model is obtained based on image data set training, the image data set comprises marked positive sample image data and unmarked negative sample image data, and the positive sample image data comprises marks used for representing a target area. The medical image data is rapidly segmented by labeling the training image data in the image data set, marking the knee joint bone region and repeatedly learning, training and the like.
In the present application, threshold segmentation or a segmentation neural network model can be used to segment the medical image into bone and background soft tissue structures, and the bone is then separated out independently to obtain joint bone image data.
When threshold segmentation is used, the medical image data is thresholded: image pixels are divided into several classes according to a preset feature threshold, giving joint bone image data.
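The thresholding step can be sketched in a few lines. This is a minimal illustration; the Hounsfield-unit cutoff of 200 and the function name `threshold_bone` are illustrative assumptions, not values from the patent.

```python
import numpy as np

def threshold_bone(ct_slice, bone_hu=200):
    """Binary bone mask: pixels at or above the feature threshold count as bone.

    bone_hu is an illustrative Hounsfield-unit cutoff (bone is much
    denser than soft tissue), not a value specified by the patent.
    """
    return (ct_slice >= bone_hu).astype(np.uint8)

ct = np.array([[-1000, 40, 300],
               [  250, 10, 500]])   # air, soft tissue, bone (HU)
mask = threshold_bone(ct)
```

The resulting mask keeps only the bone-density pixels, which is the "divide image pixels into classes by a preset feature threshold" step above.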
When the segmentation neural network model is used, a pointrend + unet model is established. First, a unet network is used as the backbone for coarse segmentation: in the first stage, four downsampling steps learn the deep features of the image, and four upsampling steps then restore the feature map to an image. Each downsampling stage comprises 2 convolutional layers and 1 pooling layer; the convolution kernel size is 3 x 3, the kernel size in each pooling layer is 2 x 2, and the numbers of convolution kernels in the convolutional layers are 128, 256 and 512. Each upsampling stage comprises 1 upsampling layer and 2 convolutional layers; the convolution kernel size of each convolutional layer is 3 x 3, the kernel size in each upsampling layer is 2 x 2, and the numbers of convolution kernels are 512, 256 and 128. A dropout layer follows the last upsampling, with the dropout rate set to 0.7. All convolutional layers are followed by an activation function, the relu function.
Then a pointrend module refines the segmentation result: a set of the most uncertain points (confidence close to 0.5) is selected, features are extracted at the selected points, the point features are computed through bilinear interpolation, and a small classifier judges which category each point belongs to. This is in fact equivalent to prediction with a 1 x 1 convolution, but points whose confidence is close to 1 or 0 are not computed, thereby improving the accuracy of segmentation.
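The bilinear interpolation used to build point features can be sketched as follows. This is a generic bilinear sampler, not code from the patent; the function name and the toy 2 x 2 feature map are assumptions.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly interpolate a 2D feature map at a fractional point (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, feat.shape[0] - 1)
    x1 = min(x0 + 1, feat.shape[1] - 1)
    wy, wx = y - y0, x - x0
    # Weighted sum of the four surrounding grid values.
    return ((1 - wy) * (1 - wx) * feat[y0, x0] +
            (1 - wy) * wx * feat[y0, x1] +
            wy * (1 - wx) * feat[y1, x0] +
            wy * wx * feat[y1, x1])

feat = np.array([[0.0, 1.0],
                 [2.0, 3.0]])
v = bilinear_sample(feat, 0.5, 0.5)  # midpoint of the four corner values
```

Sampling at a fractional coordinate is what lets pointrend evaluate features for uncertain points that do not lie on the coarse grid.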
In the model training process, the background pixel value of the data label is set to 0, the femur to 1 and the tibia to 2; the training batch_size is 6, the learning rate is set to 1e-4, the optimizer is the Adam optimizer, and the loss function is DICE loss. The training set is fed to the network for training, and the training batch size is adjusted according to the change of the loss function during training, finally giving the coarse segmentation result of each part. After entering the pointrend module, the prediction result of the previous segmentation is upsampled using bilinear interpolation, and the N most uncertain points (e.g. points with probability close to 0.5) are then selected in the denser feature map. Features are constructed for these N points and their labels are predicted, and the process is repeated until the map is upsampled to the desired size. For the point-wise feature representation of each selected point, point-wise prediction is performed with a simple multi-layer perceptron; because the MLP predicts the segmentation label of each point, it can be trained with the loss of the unet coarse segmentation task. The output after training is the femoral and tibial regions.
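The DICE loss named above can be sketched in NumPy. The smoothing term is a common convention to avoid division by zero, not a value specified in the patent.

```python
import numpy as np

def dice_loss(pred, target, smooth=1e-6):
    """1 minus the Dice coefficient between a soft prediction and a binary label."""
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
    return 1.0 - dice

# Perfect overlap gives a loss near 0; disjoint masks give a loss near 1.
a = np.array([1.0, 1.0, 0.0, 0.0])
```

Because the Dice coefficient measures region overlap rather than per-pixel accuracy, it is well suited to segmentation labels like the femur/tibia masks here.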
S103: constructing three-dimensional medical image data of the knee joint based on the two-dimensional medical image data of the knee joint;
and performing three-dimensional reconstruction processing on the two-dimensional medical image data of the knee joint through a three-dimensional reconstruction technology to obtain the three-dimensional medical image data of the knee joint.
Optionally, the two-dimensional medical image data of the knee joint to be three-dimensionally reconstructed is input to a pre-trained three-dimensional reconstruction neural network, so as to obtain three-dimensional medical image data output by the three-dimensional reconstruction neural network and corresponding to the two-dimensional medical image data of the knee joint to be three-dimensionally reconstructed.
S104: performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain pre-segmentation result image data, wherein the pre-segmentation result image data comprises independent pre-segmentation femur image data and independent pre-segmentation tibia image data;
fig. 2 is a schematic flowchart of a data processing method for knee joint femur tibia segmentation provided in the present application, and as shown in fig. 2, the method includes the following steps:
S201: performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain erosion process image data;
the erosion operation is a computer vision technique, the pixel of the bone position in the knee joint three-dimensional medical image data is 1, the pixel of the background image is 0, after erosion treatment, the pixel of the bone position surface is changed into 0 layer by layer, for example
Figure BDA0003241447470000101
After etching operation, obtaining
Figure BDA0003241447470000102
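The layer-by-layer peeling described above can be reproduced on a small binary mask. This is an illustrative sketch assuming scipy's default cross-shaped structuring element; the example matrix is not the one shown in the patent figures.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# A 5x5 binary "bone" mask: 1 = bone, 0 = background.
mask = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

# One erosion pass removes the surface layer: a pixel survives only
# if all of its 4-connected neighbours are also bone, so here only
# the centre pixel of the 3x3 block remains.
eroded = binary_erosion(mask).astype(int)
```

Applying the pass repeatedly peels the bone one layer at a time, which is exactly how the narrow connection between femur and tibia is eventually broken.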
S202: judging whether the erosion process image data meets the preset pre-segmentation condition;
identifying the erosion process image data to obtain tibia erosion process image data and femur erosion process image data;
extracting the largest connected region of the tibia erosion process image data and the femur erosion process image data;
judging the largest connected region against the preset pre-segmentation condition;
if the largest connected region meets the preset pre-segmentation condition, the femur and the tibia in the eroded image are separated, and independent pre-segmentation femur image data and pre-segmentation tibia image data are obtained;
if the largest connected region does not meet the preset pre-segmentation condition, the femur and the tibia in the eroded image are not separated, and independent pre-segmentation femur image data and pre-segmentation tibia image data cannot be obtained.
The largest connected region is identified to obtain its highest point and lowest point, which are the highest and lowest points in a preset direction; the preset direction is the direction of the line along which the tibia and femur lie when the knee joint is straight.
The erosion process image data is identified to obtain the bone highest point and the bone lowest point, wherein the bone highest point is the highest point of the femur and the tibia in the erosion process image data, and the bone lowest point is the lowest point of the femur and the tibia in the erosion process image data;
Fig. 3 shows example images of a femur and tibia that are not separated and of a femur and tibia that are separated.
If the highest point of the largest connected region is not the bone highest point, or the lowest point of the largest connected region is not the bone lowest point, the femur and the tibia in the eroded image are separated;
if the highest point of the largest connected region is the bone highest point and the lowest point of the largest connected region is the bone lowest point, the femur and the tibia in the eroded image are not separated.
S203: pre-segmentation result image data is obtained.
If the erosion process image data meets the preset separation condition, the pre-segmentation result image data is obtained, where the pre-segmentation result image data is erosion process image data in which the femur and the tibia are separated;
and if the erosion process image data does not meet the preset separation condition, an erosion iteration operation is performed on the erosion process image data until it meets the preset separation condition.
After the erosion operation is performed, the method further comprises the following step:
and carrying out bone repair operation on the pre-segmentation result image data to obtain target segmentation image data.
Fig. 4 is a schematic flow chart of a knee joint femur tibia segmentation method provided by the present application, and as shown in fig. 4, the method includes the following steps
S401: comparing the pre-segmentation result image data with the knee joint three-dimensional medical image data to obtain lost bone data;
wherein the lost bone data is the data of bone removed by erosion; the connected regions of all the eroded-away bone are computed to obtain independent bone data, namely the lost bone data.
S402: carrying out expansion processing on the lost bone data to obtain expanded bone data;
and performing expansion processing on the bone loss data, adding pixel values at the edge of the image of the bone loss, expanding the pixel values of the image of the bone loss, and obtaining the expanded bone data.
S403: calculating the overlapping area of the expanded bone data and the pre-segmentation result image data;
identifying the pre-segmentation result image data to obtain pre-segmentation femur image data and pre-segmentation tibia image data;
calculating the overlapping area of the expanded bone data and the pre-segmentation femur image data to obtain a first overlapping area;
calculating the overlapping area of the expanded bone data and the pre-segmentation tibia image data to obtain a second overlapping area;
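One plausible reading of "overlapping area" is the count of voxels shared by the dilated lost bone and a pre-segmented region; that interpretation is an assumption, sketched below:

```python
import numpy as np

def overlap_voxels(dilated_lost: np.ndarray, region: np.ndarray) -> int:
    """Overlap between the dilated lost bone and a pre-segmented region,
    measured as the number of shared voxels."""
    return int(np.logical_and(dilated_lost, region).sum())
```
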
S404: performing bone filling processing on the pre-segmentation result image data according to the overlapping areas to obtain the target segmentation image data.
The first overlapping area is compared with the second overlapping area;
if the first overlapping area is larger than the second overlapping area, the lost bone data are filled into the pre-segmentation femur image data to obtain the target segmentation image data;
and if the second overlapping area is larger than the first overlapping area, the lost bone data are filled into the pre-segmentation tibia image data to obtain the target segmentation image data.
For example, if the overlap between the eroded femur and the dilated lost bone is larger than the overlap between the eroded tibia and the dilated lost bone, the lost bone belongs to the femur; it is filled into the eroded femur, and the resulting target segmentation image data include the femur image data with the lost bone filled in together with the pre-segmentation tibia image data. During the erosion operation, a considerable amount of bone material is eroded away from both the tibia and the femur; the repair operation fills the eroded bone back into the separated femur and tibia, yielding a complete femur and tibia segmentation result and improving the accuracy of the knee joint femur and tibia segmentation.
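The filling rule of S404 can be sketched as below. This is an illustrative implementation under assumptions: overlap is measured as shared voxel count, and ties (not addressed by the patent) arbitrarily favour the femur here.

```python
import numpy as np

def assign_lost_bone(lost, dilated_lost, femur, tibia):
    """Fill a lost-bone component into whichever pre-segmented bone its
    dilated version overlaps more, returning the updated (femur, tibia)
    pair of binary masks."""
    ov_femur = int(np.logical_and(dilated_lost, femur).sum())
    ov_tibia = int(np.logical_and(dilated_lost, tibia).sum())
    if ov_femur >= ov_tibia:
        return femur | lost, tibia  # lost bone belongs to the femur
    return femur, tibia | lost      # lost bone belongs to the tibia
```
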
Fig. 5 is a schematic structural diagram of the knee joint femur and tibia segmentation apparatus provided by the present application. As shown in Fig. 5, the apparatus includes:
an obtaining module 51, configured to obtain medical image data to be processed;
an image segmentation module 52, configured to perform image segmentation on the medical image data to obtain two-dimensional medical image data of the knee joint in the medical image to be processed;
a construction module 53, configured to construct three-dimensional medical image data of the knee joint based on the two-dimensional medical image data of the knee joint;
an erosion module 54, configured to perform an erosion operation on the three-dimensional medical image data of the knee joint to obtain pre-segmentation result image data, where the pre-segmentation result image data include independent pre-segmentation femur image data and independent pre-segmentation tibia image data.
The knee joint femur and tibia segmentation apparatus provided by the present application further includes:
a repair module, configured to perform a bone repair operation on the pre-segmentation result image data to obtain target segmentation image data.
The specific manner in which each unit performs its operations has been described in detail in the method embodiments and will not be elaborated here.
In summary, in the present application, the acquired medical image data to be processed are segmented to obtain a three-dimensional medical image of the knee joint; the three-dimensional medical image is eroded until the femur and the tibia separate, yielding the pre-segmentation result image data; and the pre-segmentation result image data are then repaired by filling the bone eroded away during the erosion process back into the corresponding femur or tibia, producing a complete femur and tibia segmentation result.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 6, the electronic device includes one or more processors 61 and a memory 62; one processor 61 is taken as an example in Fig. 6.
The electronic device may further include an input device 63 and an output device 64.
The processor 61, the memory 62, the input device 63, and the output device 64 may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 6.
The processor 61 may be a Central Processing Unit (CPU) or another general-purpose processor, such as a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any combination thereof; a general-purpose processor may be a microprocessor or any conventional processor.
The memory 62, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. The processor 61 executes the various functional applications and data processing of the server, i.e., implements the knee joint femur and tibia segmentation method described above, by running the non-transitory software programs, instructions, and modules stored in the memory 62.
The memory 62 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a processing apparatus operated by the server, and the like. Further, the memory 62 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 62 may optionally include memory located remotely from the processor 61, which may be connected to a network connection device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 63 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing device of the server. The output device 64 may include a display device such as a display screen.
One or more modules are stored in the memory 62 and, when executed by the one or more processors 61, perform the method shown in Fig. 1.
Those skilled in the art will appreciate that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program stored in a computer-readable storage medium; when executed, the program can include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FM), a Hard Disk Drive (HDD), or a Solid-State Drive (SSD), etc.; the storage medium may also comprise a combination of the above kinds of memories.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (9)

1. A knee joint femur and tibia segmentation method, comprising:
acquiring medical image data to be processed;
performing image segmentation on the medical image data to obtain two-dimensional medical image data of the knee joint in the medical image to be processed;
constructing three-dimensional medical image data of the knee joint based on the two-dimensional medical image data of the knee joint;
performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain pre-segmentation result image data, wherein the pre-segmentation result image data comprise independent pre-segmentation femur image data and independent pre-segmentation tibia image data;
and performing a bone repair operation on the pre-segmentation result image data to obtain target segmentation image data, comprising the following steps:
comparing the pre-segmentation result image data with the three-dimensional medical image data of the knee joint to obtain lost bone data, wherein the lost bone data are data of the eroded-away bone;
performing dilation processing on the lost bone data to obtain dilated bone data;
calculating the overlapping area of the dilated bone data and the pre-segmentation result image data;
and evaluating the overlapping area based on a bone allocation rule, and performing bone filling processing on the pre-segmentation result image data according to the evaluation result to obtain the target segmentation image data.
2. The method of claim 1, wherein performing the erosion operation on the three-dimensional medical image data of the knee joint to obtain the pre-segmentation result image data comprises:
performing an erosion operation on the three-dimensional medical image data of the knee joint to obtain erosion-process image data;
if the erosion-process image data satisfy a preset pre-segmentation condition, obtaining the pre-segmentation result image data, wherein the pre-segmentation result image data comprise independent pre-segmentation femur image data and independent pre-segmentation tibia image data;
and if the erosion-process image data do not satisfy the preset pre-segmentation condition, iterating the erosion operation on the erosion-process image data until the preset pre-segmentation condition is satisfied.
3. The method of claim 2, wherein obtaining the pre-segmentation result image data if the erosion-process image data satisfy the pre-segmentation condition comprises:
identifying the erosion-process image data to obtain tibia erosion-process image data and femur erosion-process image data;
extracting the largest connected region of the tibia erosion-process image data and the femur erosion-process image data;
if the largest connected region satisfies the preset pre-segmentation condition, the erosion-process image data satisfy the preset pre-segmentation condition;
and if the largest connected region does not satisfy the preset pre-segmentation condition, the erosion-process image data do not satisfy the preset pre-segmentation condition.
4. The method of claim 3, wherein the erosion-process image satisfying the preset pre-segmentation condition if the largest connected region satisfies the preset pre-segmentation condition comprises:
identifying the largest connected region to obtain the highest point and the lowest point of the largest connected region, wherein the highest point and the lowest point of the largest connected region are the highest point and the lowest point along a preset direction;
identifying the erosion-process image to obtain a bone highest point and a bone lowest point, wherein the bone highest point is the highest point of the femur and the tibia in the erosion-process image, and the bone lowest point is the lowest point of the femur and the tibia in the erosion-process image;
if the highest point of the largest connected region is not the bone highest point, or the lowest point of the largest connected region is not the bone lowest point, the largest connected region satisfies the preset pre-segmentation condition;
and if the highest point of the largest connected region is the bone highest point and the lowest point of the largest connected region is the bone lowest point, the largest connected region does not satisfy the preset pre-segmentation condition.
5. The method of claim 1, wherein evaluating the overlapping area based on the bone allocation rule, and performing bone filling processing on the pre-segmentation result image data according to the evaluation result to obtain the target segmentation image data comprises:
identifying the pre-segmentation result image data to obtain pre-segmentation femur image data and pre-segmentation tibia image data;
calculating the overlapping area of the dilated bone data and the pre-segmentation femur image data to obtain a first overlapping area;
calculating the overlapping area of the dilated bone data and the pre-segmentation tibia image data to obtain a second overlapping area;
comparing the first overlapping area with the second overlapping area;
if the first overlapping area is larger than the second overlapping area, filling the lost bone data into the pre-segmentation femur image data to obtain the target segmentation image data;
and if the second overlapping area is larger than the first overlapping area, filling the lost bone data into the pre-segmentation tibia image data to obtain the target segmentation image data.
6. The method according to claim 1, wherein performing image segmentation on the medical image data to be processed to obtain two-dimensional medical image data of a knee joint in the medical image to be processed comprises:
performing image segmentation processing on the medical image data to be processed based on a preset image segmentation model to obtain two-dimensional medical image data of a knee joint in the medical image to be processed;
the preset image segmentation model is obtained based on image data set training, the image data set comprises marked positive sample image data and unmarked negative sample image data, and the positive sample image data comprises marks used for representing a target area.
7. A knee joint femur and tibia segmentation apparatus, comprising:
the acquisition module is used for acquiring medical image data to be processed;
the image segmentation module is used for carrying out image segmentation on the medical image data to obtain two-dimensional medical image data of the knee joint in the medical image to be processed;
a construction module, configured to construct three-dimensional medical image data of the knee joint based on the two-dimensional medical image data of the knee joint;
an erosion module, configured to perform an erosion operation on the three-dimensional medical image data of the knee joint to obtain pre-segmentation result image data, wherein the pre-segmentation result image data comprise independent pre-segmentation femur image data and independent pre-segmentation tibia image data;
and a repair module, configured to perform a bone repair operation on the pre-segmentation result image data to obtain target segmentation image data, wherein the bone repair operation comprises:
comparing the pre-segmentation result image data with the three-dimensional medical image data of the knee joint to obtain lost bone data, wherein the lost bone data are data of the eroded-away bone;
performing dilation processing on the lost bone data to obtain dilated bone data;
calculating the overlapping area of the dilated bone data and the pre-segmentation result image data;
and evaluating the overlapping area based on a bone allocation rule, and performing bone filling processing on the pre-segmentation result image data according to the evaluation result to obtain the target segmentation image data.
8. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the knee joint femur and tibia segmentation method of any one of claims 1 to 6.
9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor to cause the at least one processor to perform the knee joint femur and tibia segmentation method of any one of claims 1 to 6.
CN202111023781.2A 2021-09-01 2021-09-01 Knee joint femur tibia segmentation method and device Active CN113920128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111023781.2A CN113920128B (en) 2021-09-01 2021-09-01 Knee joint femur tibia segmentation method and device


Publications (2)

Publication Number Publication Date
CN113920128A CN113920128A (en) 2022-01-11
CN113920128B true CN113920128B (en) 2023-02-21

Family

ID=79233792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111023781.2A Active CN113920128B (en) 2021-09-01 2021-09-01 Knee joint femur tibia segmentation method and device

Country Status (1)

Country Link
CN (1) CN113920128B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004081871A1 (en) * 2003-03-12 2004-09-23 Siemens Corporate Research Inc. Image segmentation in a three-dimensional environment
CN108764241A (en) * 2018-04-20 2018-11-06 平安科技(深圳)有限公司 Divide method, apparatus, computer equipment and the storage medium of near end of thighbone
CN108898606A (en) * 2018-06-20 2018-11-27 中南民族大学 Automatic division method, system, equipment and the storage medium of medical image
CN110544245A (en) * 2019-08-30 2019-12-06 北京推想科技有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN112017189A (en) * 2020-10-26 2020-12-01 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
CN112435263A (en) * 2020-10-30 2021-03-02 苏州瑞派宁科技有限公司 Medical image segmentation method, device, equipment, system and computer storage medium
CN112508888A (en) * 2020-11-26 2021-03-16 中国科学院苏州生物医学工程技术研究所 Method and system for quickly and automatically segmenting cerebral artery for medical image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101718868B1 (en) * 2015-09-21 2017-03-22 한국과학기술연구원 Method for forming 3d mazillofacial model by automatically segmenting medical image, automatic image segmentation and model formation server performing the same, and storage mdeium storing the same
CN110689551B (en) * 2019-10-14 2020-07-17 慧影医疗科技(北京)有限公司 Method and device for limb bone segmentation, electronic equipment and readable storage medium
CN112489005B (en) * 2020-11-26 2021-11-09 推想医疗科技股份有限公司 Bone segmentation method and device, and fracture detection method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SpineParseNet: Spine Parsing for Volumetric MR Image by a Two-Stage Segmentation Framework With Semantic Image Representation;Shumao Pang et al;《 IEEE Transactions on Medical Imaging 》;20200921;262-273 *
Design and Implementation of a Shoulder Joint Segmentation Algorithm for 3D CT Images (三维CT图像中肩关节分割算法设计与实现); Xiao Han (肖悍); China Master's Theses Full-text Database (中国优秀硕士学位论文全文数据库); 2019-03-15; Vol. 2019, No. 03; full text *

Also Published As

Publication number Publication date
CN113920128A (en) 2022-01-11

Similar Documents

Publication Publication Date Title
CN109741309B (en) Bone age prediction method and device based on deep regression network
CN112950651B (en) Automatic delineation method of mediastinal lymph drainage area based on deep learning network
CN113076987B (en) Osteophyte identification method, device, electronic equipment and storage medium
CN108053417A (en) A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature
CN112233777A (en) Gallstone automatic identification and segmentation system based on deep learning, computer equipment and storage medium
CN110310280B (en) Image recognition method, system, equipment and storage medium for hepatobiliary duct and calculus
CN110689551B (en) Method and device for limb bone segmentation, electronic equipment and readable storage medium
CN102831614B (en) Sequential medical image quick segmentation method based on interactive dictionary migration
CN111402216B (en) Three-dimensional broken bone segmentation method and device based on deep learning
CN113689402A (en) Deep learning-based femoral medullary cavity form identification method, device and storage medium
CN114998301B (en) Vertebral body sub-region segmentation method and device and storage medium
CN106780491B (en) Initial contour generation method adopted in segmentation of CT pelvic image by GVF method
CN111724389B (en) Method, device, storage medium and computer equipment for segmenting CT image of hip joint
CN113744214A (en) Femoral stem placement method and device based on deep reinforcement learning and electronic equipment
CN114037663A (en) Blood vessel segmentation method, device and computer readable medium
CN110992370A (en) Pancreas tissue segmentation method and device and terminal equipment
CN113241155B (en) Method and system for acquiring mark points in skull side position slice
CN114757908A (en) Image processing method, device and equipment based on CT image and storage medium
CN113920128B (en) Knee joint femur tibia segmentation method and device
CN113838048A (en) Cruciate ligament preoperative insertion center positioning and ligament length calculating method
CN113077418A (en) CT image skeleton segmentation method and device based on convolutional neural network
CN106780492B (en) Method for extracting key frame of CT pelvic image
WO2022111383A1 (en) Ct-based rib automatic counting method and device
CN109509189B (en) Abdominal muscle labeling method and labeling device based on multiple sub-region templates
CN114049358A (en) Method and system for rib case segmentation, counting and positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100176 2201, 22 / F, building 1, yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee after: Beijing Changmugu Medical Technology Co.,Ltd.

Patentee after: Zhang Yiling

Address before: 100176 2201, 22 / F, building 1, yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee before: BEIJING CHANGMUGU MEDICAL TECHNOLOGY Co.,Ltd.

Patentee before: Zhang Yiling
