CN113870261B - Method and system for recognizing force line by using neural network, storage medium and electronic device - Google Patents

Method and system for recognizing force line by using neural network, storage medium and electronic device

Info

Publication number
CN113870261B
Authority
CN
China
Prior art keywords
image
neural network
tibia
femur
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111455887.XA
Other languages
Chinese (zh)
Other versions
CN113870261A (en)
Inventor
黄志俊
刘金勇
钱坤
范昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lancet Robotics Co Ltd
Original Assignee
Lancet Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lancet Robotics Co Ltd filed Critical Lancet Robotics Co Ltd
Priority to CN202111455887.XA
Publication of CN113870261A
Application granted
Publication of CN113870261B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/105Modelling of the patient, e.g. for ligaments or bones
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/107Visualisation of planned trajectories or target regions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Veterinary Medicine (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

A method and a system for assisting in identifying force lines by using a neural network, usable for detecting force lines before and after an operation, assisting the operation, and assessing postoperative recovery. The method comprises the following steps: combining a CT file sequence into a complete three-dimensional image, then slicing from the coronal plane to obtain a first image; inputting the first image into a first multi-classification segmentation neural network, based on unet and using a softmax activation function, to classify the femur and the tibia differently and thereby segment the models of the femur and the tibia at one time; and finding the femoral head center point on the model of the femur using a second segmentation neural network to determine the force line, wherein the further key physiological points are calculated based on the point cloud: the femoral knee joint center point, the tibial knee joint center point and the tibial ankle joint center point. The first image is subjected to bilinear quantization processing, so that it can be quantized downward as well as interpolated upward by bilinear interpolation.

Description

Method and system for recognizing force line by using neural network, storage medium and electronic device
Technical Field
The invention relates to the technical field of medicine, specifically the fields of computer-aided planning for joint replacement and of medical image data processing, and in particular to a method for assisting in identifying a force line by using a neural network and a system for assisting surgical planning with the force line so identified. More particularly, it relates to a method for calibrating the force line based on image reconstruction, deep learning, numerical algorithms and the like, and in particular redesigns, based on the coronal plane, a deep-learning labeling scheme that differs from past approaches.
Background
With the rapid development of modern society, every industry is in close contact with the IT industry, and the medical industry is no exception. In knee joint replacement assisted by a surgical robot, determining the position of the force line is very important: it may decide the success or failure of the surgery and the postoperative recovery of the patient. If the force line is not calibrated correctly, the lesion may recur during the patient's postoperative recovery, and the whole operation may even fail completely.
Disclosure of Invention
Moreover, in most patients the force line deviates after the knee joint becomes diseased, compared to a healthy knee. In order to correct the direction of the force line, the force line needs to be re-measured so that an intraoperative plan and a postoperative correction plan can be designed.
To solve the above problems, the present invention provides a method and system for assisting in identifying a force line using a neural network.
According to an aspect of the present invention, there is provided a method for assisting in identifying a line of force using a neural network, comprising the steps of: obtaining a series of first images by combining a sequence of CT files into a complete three-dimensional image, followed by slicing from the coronal plane; obtaining a second image by carrying out bilinear quantization processing on the first image, wherein the bilinear quantization processing adjusts the weight value of a pixel on the basis of bilinear interpolation so that the weight of the target image is no longer fixed to 1; inputting the second image, containing coronal-plane image data of the femur and the tibia, into a first multi-classification segmentation neural network to classify the femur and the tibia differently, so as to segment a model of the femur and a model of the tibia at one time; and using a second segmentation neural network to find key physiological points on the model of the femur and the model of the tibia to determine the force line.
Optionally, a femoral head center point serving as a key physiological point in the model of the femur is identified by using the second segmentation neural network, and the remaining key physiological points, namely the femoral knee joint center point, the tibial knee joint center point and the tibial ankle joint center point, are determined based on the point cloud data in the three-dimensional virtual image model reconstructed by the second segmentation neural network.
In this manner, force lines may be accurately and efficiently determined when performing assisted knee arthroplasty, for example, by a surgical robot, advantageously assisting in preoperative planning.
Accordingly, according to another aspect of the present invention, there is provided a system for assisting knee replacement, which assists preoperative planning by assisting in identifying a force line using a neural network, comprising the following units: a slicing unit configured to obtain a series of first images by combining the sequence of CT files into one complete three-dimensional image and subsequently slicing from the coronal plane; a bilinear quantization processing unit configured to obtain a second image by performing bilinear quantization processing on the first image, wherein the bilinear quantization processing is performed by adjusting a weight value of a pixel on the basis of bilinear interpolation so that the weight of a target image is not fixed to 1; a classification unit configured to input a second image including coronal plane image data of the femur and the tibia into a first multi-classification segmentation neural network to classify the femur and the tibia differently, and to segment a model of the femur and a model of the tibia at one time; and a key physiological point determination unit configured to determine a force line by searching for key physiological points on the model of the femur and the model of the tibia using the second segmentation neural network.
Preferably, the key physiological point determination unit includes: a segmentation module configured to find a femoral head center point serving as a key physiological point on the model of the femur using a second neural network for segmentation; and the point cloud data computing module is used for determining other femur knee joint central points, tibia knee joint central points and tibia ankle joint central points which are used as key physiological points based on point cloud data in the three-dimensional virtual image model reconstructed by the neural network for the second segmentation.
Further, according to still another aspect of the present invention, there is provided, as a detection system applicable to assisting knee joint replacement, a surgical planning system capable of identifying and measuring a force line using any one of the above methods or systems, thereby being capable of measuring the force line before and after surgery with neural-network assistance so as to support preoperative planning, and capable of assisting postoperative correction planning by comparing the measured preoperative and postoperative force lines.
Thus, according to the invention, by providing a deep-learning labeling scheme based on the coronal plane that differs from past approaches, the force line can be calibrated accurately and effectively by means of image reconstruction, deep learning, numerical analysis algorithms and the like, providing favorable assistance for preoperative and postoperative surgical planning.
Drawings
Fig. 1 schematically shows a schematic diagram of slicing from a coronal plane according to an embodiment of the present invention.
Fig. 2 schematically shows a schematic diagram of bilinear interpolation.
FIG. 3 schematically illustrates an infrastructure flow of a unet employed in accordance with an embodiment of the present invention.
Fig. 4 schematically illustrates data labeling performed on a cross-section according to an embodiment of the present invention.
Fig. 5 schematically shows an anatomical conceptual diagram of how the force lines are determined.
Fig. 6 schematically shows an anatomical conceptual diagram of how the center point of the femoral knee joint is determined.
Fig. 7 schematically shows the size relationship of the center point of the femoral knee joint in world coordinates.
Fig. 8 schematically shows a principle diagram of linear interpolation in one-dimensional space.
Fig. 9 shows a flow chart of a method according to an exemplary embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings. The exemplary embodiments described below and illustrated in the figures are intended to teach the principles of the present invention and enable one skilled in the art to implement and use the invention in several different environments and for several different applications. The scope of the invention is, therefore, indicated by the appended claims, and the exemplary embodiments are not intended to, and should not be considered as, limiting the scope of the invention.
Referring to fig. 9, the present invention provides a method and system for segmenting femur and tibia images using deep learning and then, on the basis of the segmented images, determining the position of the force line according to its medical definition, so as to improve the accuracy of the subsequent operation. The method includes the following specific steps:
step 101: combining a CT file sequence into a complete three-dimensional image;
step 102: slicing from the coronal plane to obtain a series of first images;
step 103: obtaining a second image by performing bilinear quantization processing on the first image;
step 104: inputting a second image containing coronal plane image data of the femur and the tibia into a neural network to classify the femur and the tibia differently; and
step 105: key physiological points are sought to determine the line of force.
Preferably, as the image data, a dicom file sequence of CT (Computed Tomography) is reorganized and re-partitioned: the CT file sequence is combined into one complete three-dimensional image, and a slice is then taken from the coronal plane to obtain the first image.
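For illustration, a minimal sketch of this recombination and coronal re-slicing; the library (SimpleITK) and the directory name "ct_series/" are assumptions, as the patent does not specify an implementation.

```python
# A sketch only: SimpleITK and the "ct_series/" path are assumptions.
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_series/"))
volume = sitk.GetArrayFromImage(reader.Execute())   # shape (slices, H, W)

# Combining the dicom files gives one 3D array; indexing the second axis
# re-slices the volume from the coronal plane to yield the first images.
first_images = [volume[:, y, :] for y in range(volume.shape[1])]
```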
That is, the method includes: slicing, from the coronal plane, previously obtained raw image data including the femur and the tibia to obtain a first image; using the first image as the input to a multi-class segmentation neural network that classifies the femur and the tibia differently, so as to segment a model of the femur and a model of the tibia at one time; reconstructing a three-dimensional virtual image model using a second segmentation neural network; and finding key physiological points on the model of the femur and the model of the tibia to determine the force line, the femoral head center point being identified as a key physiological point by the second segmentation neural network.
In addition, the key physiological points further comprise the femoral knee joint center point, the tibial knee joint center point and the tibial ankle joint center point. These are determined from the point cloud data of the three-dimensional virtual image model based on the positional relationships, or the section ranges, in which the femoral head center point, the femoral knee joint center point, the tibial knee joint center point and the tibial ankle joint center point lie, so that the corresponding force line is determined from each pair among the four key physiological points.
An exemplary embodiment according to the present invention is described below taking a CT image as an example.
With respect to two-dimensional CT images in the dicom series format, each file in a series can be regarded as one 2D picture. However, when a segmentation operation is performed on a CT image with such spatial characteristics, each CT slice in a set of parallel bone slices is observed in the transverse plane (also called the axial plane), see (A) and (B) of fig. 1, and deciding whether a given slice belongs to the tibia or the femur there is very difficult. Identifying and determining the critical anatomical points for calibrating the force lines in this manner is therefore difficult and tedious.
In order to solve the problem, the inventor proposes a technique for slicing a CT image from a coronal plane, and proposes a novel bilinear quantization processing method based on bilinear interpolation aiming at the potential technical problem of the technique. The details are as follows.
< coronal plane slicing and bilinear quantization >
First, the present inventors combined all CT file sequences into one complete three-dimensional image (see (C) of fig. 1). In this manner, slices can be taken from either the coronal or sagittal planes.
According to one embodiment of the present invention, slices from the coronal plane are selected (fig. 1 (C)) to yield a series of first images, taking into account the left and right spatial relationships with respect to the two limbs of the human body.
A quantization problem then arises when segmenting in a neural network. The segmentation task performed in a neural network is inferred by continuously downsampling to extract features and upsampling to decompose and fuse features; that is, the picture length and width should have at least their lowest 4 bits equal to 0 in binary representation, because even the most basic segmentation network downsamples 4 times, so in decimal terms they should be at least a multiple of 16.
For example, when a picture is downsampled, its length and width are halved. If downsampling starts from a size of 512 × 512, the picture sizes after 4 downsamplings should be: 256 × 256 after 1, 128 × 128 after 2, 64 × 64 after 3, and 32 × 32 after 4; every size divides evenly. When 4 downsamplings are applied to an image of unusual size, such as a 326 × 326 picture, the process instead goes: 163 × 163 after 1, then 81.5 × 81.5 after 2. A floating-point size appears and must be quantized, since floating-point numbers cannot occur in the pixels making up a picture; quantizing to 81 × 81 loses some information. After the 3rd downsampling the size is 40.5 × 40.5, floating point again, which must be quantized once more to 40 × 40, giving 20 × 20 after the 4th downsampling. Thus 2 quantization operations occur in only 4 downsamplings, a large number of features are lost, and the subsequent results suffer. Secondly, from the engineering standpoint, widely used deep learning frameworks such as TensorFlow and PyTorch do not support such quantization: when a floating-point size appears during downsampling, an error is raised directly and the program stops, so developers would have to write the convolution operations themselves, greatly increasing the preparation work for deep learning and making the modeling process inefficient.
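The size arithmetic above can be checked with a small sketch (the function name is hypothetical):

```python
# Tracks the side length through 4 downsamplings, quantizing odd sizes the
# way described above; counts how many lossy quantizations were needed.
def downsample_sizes(side, times=4):
    sizes, quantizations = [side], 0
    for _ in range(times):
        if side % 2:            # a floating-point size would appear here
            side -= 1           # quantize, losing feature information
            quantizations += 1
        side //= 2
        sizes.append(side)
    return sizes, quantizations

print(downsample_sizes(512))    # ([512, 256, 128, 64, 32], 0)
print(downsample_sizes(326))    # ([326, 163, 81, 40, 20], 2)
```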
Considering that the size of the CT originals (cross sections) is 512 × 512, the inventors also quantize the length of the coronal plane to 512, and the quantization uses bilinear interpolation, because bilinear interpolation is both highly accurate and fast.
As a comparative example, the inventors also tried quantization using deep learning, such as sub-pixel convolution or super-resolution reconstruction. The results showed that, for reconstruction at the 512-pixel level, the effect is comparable to bilinear interpolation but the speed is much slower, so deep-learning reconstruction was abandoned for the quantization operation in favor of a simple interpolation algorithm.
As another comparative example, the inventors also tried nearest-neighbor interpolation, which would probably be more efficient, but the results showed that it does not work well. On analysis, the inventors consider the fundamental reason to be that a resolution of 512 × 512 is still small enough that nearest-neighbor interpolation leaves the image seriously unbalanced in proportion and poor in quality, which degrades the subsequent training results.
However, when using bilinear interpolation, the following problem is still encountered: the re-sliced picture is generally smaller than 512, which permits interpolation, but if the number of CT shots exceeds 512, the converted coronal plane will be longer than 512.
In this case, the present inventors redesigned a quantization method, i.e., "bilinear quantization" described later, from bilinear interpolation.
The principle of bilinear interpolation is to compute, at interpolation time, the difference between the length and width of the target picture and the original picture, determine from it the position of each pixel of the target picture within the original picture, and compute weights to determine each pixel's color. For example, assume that each square of the upper and lower rows shown in fig. 8 represents the vector of one pixel, and that a one-dimensional vector of length 10 is to be linearly interpolated and extended to a one-dimensional vector of length 15. The pixel relationship between the two vectors is computed: the original vector has 10 pixels and the target vector 15, so the first pixel of the target vector occupies two thirds of the first pixel of the original vector (10/15 = 2/3) and is filled with that pixel's color. The second pixel of the target vector covers one third of the first pixel of the original vector and one third of its second pixel, so its color is a blend of the colors of the first and second pixels of the original vector; the blending weights are generated from the ratios in which the target pixel overlaps the two original pixels, in this case 50% on each side, i.e. weights of 0.5:0.5, so the color is filled in as the mean of the first and second pixels of the original vector. Continuing in this fashion performs linear interpolation in one-dimensional space (see fig. 8).
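As a rough illustration of such weighted blending in one dimension, here is a sketch under the common end-point-aligned convention (not necessarily the exact weighting of fig. 8):

```python
import numpy as np

def linear_resize_1d(src, dst_len):
    """Stretch a 1D vector by blending each target pixel from its two
    nearest source pixels, weighted by proximity."""
    src = np.asarray(src, dtype=float)
    out = np.empty(dst_len)
    for i in range(dst_len):
        pos = i * (len(src) - 1) / (dst_len - 1)  # position in source coords
        lo = int(pos)
        hi = min(lo + 1, len(src) - 1)
        w = pos - lo                              # weight of right neighbour
        out[i] = (1 - w) * src[lo] + w * src[hi]
    return out

print(linear_resize_1d(np.arange(10), 15))        # length-10 -> length-15
```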
Bilinear interpolation is linear interpolation performed in two-dimensional space, first along the X axis and then along the Y axis, hence the name bilinear (fig. 2). For the four neighbours $Q_{11}=(x_1,y_1)$, $Q_{12}=(x_1,y_2)$, $Q_{21}=(x_2,y_1)$ and $Q_{22}=(x_2,y_2)$ of a target position $(x,y)$, the expression formula is:

$$f(x,y) \approx \frac{(x_2-x)(y_2-y)\,f(Q_{11}) + (x-x_1)(y_2-y)\,f(Q_{21}) + (x_2-x)(y-y_1)\,f(Q_{12}) + (x-x_1)(y-y_1)\,f(Q_{22})}{(x_2-x_1)(y_2-y_1)}$$

(formula 1).
The enlarging character of traditional bilinear interpolation means it can only convert a small image into a large one. On this basis, the inventor makes appropriate changes on the target-picture side, so that a target picture smaller than the original picture can also be quantized: bilinear interpolation can then not only interpolate upward but also quantize downward. Such an approach may accordingly be referred to herein as bilinear quantization.
The basic principle of the bilinear quantization proposed by the inventor is the same; the improvement mainly lies in how pixel weights are handled during reduction: the span of 1-2 pixels of the original image becomes 2-3 pixels, and the weight is still computed from the proportion each pixel occupies, with the color determined according to the weights.
In this way, the present inventors quantized the length of the coronal plane to 512, and applied bilinear quantization processing to the first image to obtain a second image.
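A minimal sketch of the intended behaviour, assuming OpenCV: plain bilinear interpolation when enlarging, and area-weighted reduction when shrinking (OpenCV's INTER_AREA blends the 2-3 source pixels each target pixel spans, much as described above). This approximates, rather than reproduces, the patent's bilinear quantization.

```python
import cv2
import numpy as np

def bilinear_quantize(img, side=512):
    # Enlarge with bilinear interpolation; shrink with area-weighted blending.
    interp = cv2.INTER_AREA if max(img.shape[:2]) > side else cv2.INTER_LINEAR
    return cv2.resize(img, (side, side), interpolation=interp)

coronal = np.random.rand(600, 512).astype(np.float32)  # >512 CT shots
print(bilinear_quantize(coronal).shape)                # (512, 512)
```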
< segmentation of femur and tibia by deep learning >
Then, based on the second image, the femur and tibia are segmented using deep learning; the neural network used is a unet, appropriately modified (corresponding to the first neural network).
According to one embodiment, as shown in fig. 3, the structure of the unet is 4 downsampling and 4 upsampling.
For unet-based neural networks, the size of the input image can in principle be arbitrary, but since the network according to the invention downsamples 4 times, the length and width must each be a multiple of $2^4 = 16$ to avoid a program error; this is the reason for processing the data as described above. Considering that the standard size of CT images is always 512 × 512, the size is fixed: the second image input to the network is 512 × 512, consistent with the size of the CT originals.
After the second image is input to the neural network, a first convolution structure is applied: a convolution operation with 64 convolution kernels of size 3 × 3, followed by a BN (batch normalization) operation to normalize the data distribution, and then activation with the ReLU function (relu(x) = max(0, x)) to increase the nonlinearity of the neural network; this is a 64-channel convolution structure. After two such convolution structures, downsampling is performed using max pooling (maxpool), yielding a 256 × 256 × 64 feature map; this is the "first layer" of the unet.
The second layer then proceeds in the same way, and so on through the fifth layer; the only difference is that the number of channels in each layer is doubled relative to the previous layer, and the output of each layer is stored. After the fourth layer, the fifth layer forms a 32 × 32 × 1024 feature map.
After that, upsampling is performed. As mentioned above, the feature map is saved before each downsampling so that it can be concat-spliced with the feature map during upsampling, which lets the features of the original image use the characterization information more accurately. Each upsampling layer differs from its downsampling counterpart only in that the number of channels is halved while the spatial size doubles; everything else is identical. Finally, a 64-channel feature map whose length and width match the input original image is obtained, one more 2-channel convolution structure is applied, and the activation function produces the final output.
In this unet structure, max pooling (maxpool) is used for downsampling and bilinear interpolation for upsampling, because the interpolation method offers excellent cost performance: it is fast while losing almost no precision.
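A minimal PyTorch sketch of one such layer and its skip connection (the class and variable names are illustrative, not from the patent):

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two (3x3 conv -> BN -> ReLU) structures, one unet 'layer' as above."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

down = nn.MaxPool2d(2)                             # maxpool downsampling
up = nn.Upsample(scale_factor=2, mode="bilinear")  # bilinear upsampling

x = torch.randn(1, 1, 512, 512)
f1 = DoubleConv(1, 64)(x)      # saved feature map for the skip connection
p1 = down(f1)                  # 256 x 256 x 64, the "first layer" output
u1 = up(p1)                    # back to 512 x 512 during upsampling
cat = torch.cat([u1, f1], 1)   # concat splicing with the saved feature map
print(cat.shape)               # torch.Size([1, 128, 512, 512])
```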
The modification is to replace the sigmoid activation function with the softmax activation function for the following reasons.
The sigmoid activation function formula is as follows:
$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

(formula 2), where e is the natural constant.
The output of the sigmoid activation function is a constant between 0 and 1, so a threshold can be placed in the middle: if the threshold is set to 0.5, values above 0.5 are positive samples (correct results) and values below 0.5 are negative samples (background). What is needed here, however, is a neural network capable of a multi-classification task (background class, femur class, tibia class), so the activation function used for the output must be replaced, and the softmax activation function is used.
The formula of the softmax activation function is:
$$\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_{j=1}^{m} e^{x_j}}$$

(formula 3),

$$\sum_{i=1}^{m} \mathrm{softmax}(x)_i = 1$$

(formula 4),

where x is the input vector produced by the neural network after feature processing, e is the natural constant, m is the number of classes, and $\mathrm{softmax}(x)_i$ denotes the output for each element of the vector after softmax.
The final result of the softmax activation function is therefore a set of constants whose sum is 1, with the number of constants m determined by the number of classes; the classification task here has 3 classes, so m = 3. The largest constant in the output sequence is taken as the correct result, and the class corresponding to that constant is the predicted class.
According to an embodiment of the present invention, with background as class 0, femur as class 1 and tibia as class 2, the prediction for a given pixel of a picture in the sequence might be [1:0.15, 2:0.73, 0:0.12], indicating a probability of 15% for class 1 (femur), 73% for class 2 (tibia) and 12% for class 0 (background) (class 0 is usually placed at the end of the sequence). Class 2 has the maximum probability, so the pixel is assigned class 2, and the probabilities of the 3 classes sum to 1.
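A toy per-pixel check of this behaviour (the logits are invented to roughly reproduce the probabilities of the example):

```python
import torch

logits = torch.tensor([-0.50, -0.28, 1.30])  # classes 0, 1, 2 for one pixel
probs = torch.softmax(logits, dim=0)
print(probs, probs.sum())   # ~[0.12, 0.15, 0.73], summing to 1
print(int(probs.argmax()))  # 2 -> the pixel is predicted as tibia
```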
On the basis of the obtained model of the femur and the model of the tibia, each key central point for determining the force line can be further searched, and then the positions of the femur force line and the tibia force line can be calibrated according to the obtained coordinates and the positions in the model.
More specifically, a model of the femur, a model of the tibia, has been obtained at this time, and according to the medical concept, the femoral force line should be from the femoral head center point 1 to the femoral knee joint center 2, and the tibial force line should be from the tibial knee joint center 3 to the tibial ankle joint center 4 (see fig. 5).
Since the femur and the tibia were classified differently above, the 4 physiological points are easier to find. The following describes methods for finding the femoral head center point 1, the femoral knee joint center point 2, the tibial knee joint center point 3 and the tibial ankle joint center point 4, though the order is not limited thereto.
< determination of center point of femoral head >
First, the contact surface of the femoral head and the hip joint is segmented using a segmentation network (corresponding to a second neural network).
As shown in fig. 4, the data are marked as a hemispherical surface, distributed over the cross sections in nearly crescent shapes (the white "C"-shaped layer around the femoral head in fig. 4; although not obvious in the figure, the mark is present in the picture annotation information and is easily distinguished by a computer).
The whole process is trained with a standard unet, from the cross section. The size of the input image is in principle arbitrary, but since downsampling is performed 4 times, the length and width must each be a multiple of $2^4 = 16$ to avoid a program error, as explained above; considering that the standard size of CT images is always 512 × 512, the size is fixed. The network proceeds as before: a first convolution structure with 64 convolution kernels of size 3 × 3, a BN (batch normalization) operation to normalize the data distribution, then ReLU activation (relu(x) = max(0, x)) to increase the nonlinearity of the neural network; this 64-channel convolution structure is applied twice and followed by downsampling, yielding a 256 × 256 × 64 feature map, the "first layer" of the unet. The second through fifth layers repeat the same operations, each doubling the channel count of the previous layer and storing its output, so that after the fourth layer the fifth layer forms a 32 × 32 × 1024 feature map.
After that, upsampling is performed. As before, the feature map saved prior to each downsampling is concat-spliced with the feature map during upsampling so that the features of the original image use the characterization information more accurately; each upsampling layer halves the channel count while doubling the spatial size, everything else being identical. Finally, a 64-channel feature map whose length and width match the input original image is obtained, one more 2-channel convolution structure is applied, and activation here uses the sigmoid activation function to produce the final output. After stacking all the results to obtain a spherical surface, 10 points are taken from it at random, the resulting nonlinear system of ten quadratic equations in four unknowns is written out from the spherical equation, and the system is then solved by the least squares method. The standard formula of the spherical equation is:
$$(x-a)^2 + (y-b)^2 + (z-c)^2 = R^2$$

(formula 5),
where x, y, z represent the X, Y, Z coordinates of the sphere center, R represents the radius of the sphere, and a, b, c represent the X, Y, Z coordinates of any point on the sphere.
Because the segmentation network yields only part of the spherical surface, and there are four unknowns, two schemes were weighed for accuracy and speed through repeated derivation and experiment. The first substitutes all points into the spherical equation; it is very accurate, but the computation consumes too many resources, potentially turning a millisecond-level calculation into minutes, so it was abandoned. The second substitutes randomly chosen points into the spherical equation; its accuracy is high, with results differing from the first scheme by less than 0.01 CT layer-thickness units. Through repeated experiments, the inventor found that selecting 10 points works best, finally giving a nonlinear system of ten quadratic equations in four unknowns.
To improve the accuracy of this procedure, 100 loop iterations are performed and the mean of all results is taken as the final result; in this way, the original error of 0.01 CT layer-thickness units is reduced to a negligible level.
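A sketch of this fitting procedure, assuming the segmented surface points are available as an (N, 3) numpy array; linearizing the spherical equation into a least-squares problem is a standard technique, not spelled out in the patent:

```python
import numpy as np

def fit_sphere(points):
    # Expanding (formula 5) makes the problem linear in the four unknowns
    # (centre cx, cy, cz and d = R^2 - |c|^2):  2*p.c + d = |p|^2.
    p = np.asarray(points, dtype=float)
    A = np.hstack([2 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, d = sol[:3], sol[3]
    return centre, np.sqrt(d + centre @ centre)

def femoral_head_centre(surface_points, samples=10, iters=100, seed=0):
    # 100 iterations of fitting 10 random surface points, then averaging.
    rng = np.random.default_rng(seed)
    centres = [fit_sphere(rng.choice(surface_points, samples, replace=False))[0]
               for _ in range(iters)]
    return np.mean(centres, axis=0)
```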
Thus, the coordinates of the femoral head center point are obtained through the above processing.
Since the 3D model reconstructed by the neural network is composed of a point cloud, once the coordinates of the femoral head center point are identified, the other three key physiological points, namely the femoral knee joint center point, the tibial knee joint center point and the tibial ankle joint center point, can be computed from the point cloud data based on their positional relationships to the femoral head center point or on the characteristics of their respective section ranges.
< determination of center point of femoral Knee >
As described above, pixels representing the femur carry classification result 1 in the overall CT result. All pixels classified as 1 are collected, and from this set the coordinate points of the lower quarter portion along the cross sections are computed programmatically.
Then, the sagittal plane position can be obtained by finding the maximum value and the minimum value of the screened data in the sagittal plane and calculating the mean value.
Then, the minimum value in the cross section is selected to obtain the femoral knee joint center point (the intersection of the dashed cross-hairs in fig. 6). The magnitude relations in world coordinates are shown in fig. 7: the arrows point in the positive directions, the coordinate system matches the mathematical XYZ system, the X axis runs in the sagittal-plane direction, the Y axis in the coronal-plane direction, and the Z axis in the transverse-plane direction.
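A sketch of this search over the labelled point cloud; the synthetic label volume, the axis orientation (z transverse and increasing upward, x sagittal) and the choice of coronal coordinate are assumptions for illustration:

```python
import numpy as np

labels = np.zeros((64, 64, 64), np.uint8)   # stand-in for the label volume
labels[20:60, 28:36, 28:36] = 1             # a crude synthetic "femur"

z, y, x = np.nonzero(labels == 1)           # voxels classified as femur (1)

cut = z.min() + 0.25 * (z.max() - z.min())  # lower quarter along transverse
sel = z <= cut
xs, ys, zs = x[sel], y[sel], z[sel]

centre_x = (xs.min() + xs.max()) / 2        # mean of sagittal extremes
centre_z = zs.min()                         # minimum in the transverse axis
centre_y = ys[zs == centre_z].mean()        # assumed coronal coordinate
print((centre_x, centre_y, centre_z))       # femoral knee joint centre
```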
< determination of the center point of the tibial Knee >
As described above, pixels representing the tibia carry classification result 2 in the overall CT result. All pixels classified as 2 are collected, and the upper half portion along the cross sections is selected.
And then, carrying out mean value calculation on the maximum value and the minimum value on the sagittal plane to obtain the sagittal plane coordinate.
Then, the maximum coordinate on the cross section is searched, and the central point of the tibia knee joint is found.
< determination of tibial ankle center Point >
Similarly, for the tibial ankle center point, the lower half of the tibial point cloud labeled 2 is taken; the sagittal coordinate is again the mean of the maximum and minimum values in the sagittal plane, and the minimum value in the transverse plane is taken as the tibial ankle center point.
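The tibial points follow the same pattern; a sketch under the same assumptions (synthetic label volume and axis convention as in the femoral sketch above):

```python
import numpy as np

labels = np.zeros((64, 64, 64), np.uint8)   # stand-in label volume as before
labels[4:44, 28:36, 28:36] = 2              # a crude synthetic "tibia"

z, y, x = np.nonzero(labels == 2)
mid = z.min() + 0.5 * (z.max() - z.min())

up = z >= mid                               # upper half: tibial knee centre
tib_knee = ((x[up].min() + x[up].max()) / 2, y[up].mean(), z[up].max())

lo = z < mid                                # lower half: tibial ankle centre
tib_ankle = ((x[lo].min() + x[lo].max()) / 2, y[lo].mean(), z[lo].min())
print(tib_knee, tib_ankle)
```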
After all the required physiological point coordinates are obtained as described above, the positions of the femoral force line and the tibial force line can be calibrated according to the obtained coordinates and the positions in the model.
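Once the four points are known, each force line is simply the segment its pair of points defines; a sketch follows (the angle metric is an illustrative addition, not taken from the patent):

```python
import numpy as np

def force_line_angle(fem_head, fem_knee, tib_knee, tib_ankle):
    # Femoral line: head centre -> knee centre; tibial line: knee -> ankle.
    femoral = np.subtract(fem_knee, fem_head)
    tibial = np.subtract(tib_ankle, tib_knee)
    c = femoral @ tibial / (np.linalg.norm(femoral) * np.linalg.norm(tibial))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Made-up coordinates for illustration only.
print(force_line_angle((250, 250, 580), (255, 252, 320),
                       (255, 250, 315), (258, 249, 40)))
```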
In summary, according to the present invention, a method or system configured as follows is provided.
(1) A method of using a neural network to assist in identifying a line of force, comprising the steps of:
obtaining a series of first images by combining the sequence of CT files into one complete three-dimensional image, followed by slicing from the coronal plane;
obtaining a second image by performing bilinear quantization processing on the first image, wherein the bilinear quantization processing is to adjust the weight value of a pixel on the basis of bilinear interpolation so that the weight of a target image is not fixed to be 1 any more;
inputting the second image containing the coronal plane image data of the femur and the tibia into a first multi-classification segmentation neural network to perform different classifications on the femur and the tibia, so as to segment the model of the femur and the model of the tibia at one time; and
using a second segmentation neural network, finding key physiological points on the model of the femur and the model of the tibia to determine a force line.
Wherein a femoral head central point, which is a key physiological point, in the model of the femur can be identified by using the second segmentation-use neural network, and
and the remaining key physiological points, namely the femoral knee joint center point, the tibial knee joint center point and the tibial ankle joint center point, are determined based on the point cloud data in the three-dimensional virtual image model reconstructed by the second segmentation neural network. The correspondences among the four key physiological points can thus be used to confirm the respective force lines.
(2) The first image and the second image are both 512 x 512 pixels in size.
(3) The second image is input into the unet-based first multi-classification segmentation neural network to segment the femur image and the tibia image, wherein downsampling uses max pooling (maxpool), upsampling uses bilinear interpolation, and a softmax activation function outputs a third image, relating to the femur image and the tibia image, that completes the classification task.
(4) The third image is input into the second segmentation neural network to segment the contact surface between the femoral head and the hip joint; the second segmentation neural network is the same as the first multi-class segmentation neural network, but is activated using a sigmoid activation function instead of the softmax activation function to obtain the final output as a fourth image, and the feature map is saved before each downsampling and concat-spliced with the feature map during upsampling.
(5) The approximate hemisphere of the femoral head is segmented using the second segmentation neural network, and the sphere center fitted by the least squares method is taken as the femoral head center point: 10 points are taken at random in the approximate hemisphere, the resulting nonlinear system of ten quadratic equations in four unknowns is written out from the spherical equation and solved by least squares, and the mean of the loop-iteration results is determined as the femoral head center point.
(6) The pixels classified by the second segmentation neural network as representing the femur are used to calculate a point cloud data set of coordinate points of the lower quarter femur portion of the cross section; the mean of the maximum and minimum position coordinates on the sagittal plane is calculated, and the minimum value on the cross section is selected to obtain the femoral knee joint center point.
(7) A point cloud data set of coordinate points of the upper half tibia portion of the cross section is calculated from the pixels classified by the second segmentation neural network as representing the tibia; the mean of the maximum and minimum position coordinates on the sagittal plane is calculated, and the maximum value on the cross section is selected to obtain the tibial knee joint center point.
(8) Coordinate points of the lower half tibia portion of the cross section are calculated from the pixels classified by the second segmentation neural network as representing the tibia; the sagittal coordinate is the mean of the maximum and minimum position coordinates on the sagittal plane, and the minimum value on the cross section is selected to obtain the tibial ankle joint center point.
(9) A system for assisting in identifying a line of force using a neural network, comprising the following elements:
a slicing unit for obtaining a series of first images by combining the sequence of CT files into a complete three-dimensional image and subsequently slicing from the coronal plane;
a bilinear quantization processing unit which obtains a second image by performing bilinear quantization processing on the first image, wherein the bilinear quantization processing adjusts the weight value of the pixel on the basis of bilinear interpolation so that the weight of the target image is no longer fixed to 1;
a classification unit which inputs the second image containing the coronal plane image data of the femur and the tibia into a first multi-classification segmentation neural network to perform different classifications on the femur and the tibia, so as to segment a model of the femur and a model of the tibia at one time; and
and a key physiological point determining unit which uses a second segmentation neural network to find key physiological points on the model of the femur and the model of the tibia to determine a force line.
Wherein, the key physiological point determining unit may include:
a segmentation module that finds a femoral head center point as a key physiological point on a model of a femur using a second segmentation-use neural network; and
and the point cloud data calculation module is used for determining other femur knee joint central points, tibia knee joint central points and tibia ankle joint central points which are used as key physiological points based on point cloud data in the three-dimensional virtual image model reconstructed by the neural network for the second segmentation.
Among them, as a detection system applicable to assisting knee replacement, that is, as a surgical planning system, the force line can be identified and measured using any one of the above methods or systems; the force line can thus be measured before and after an operation with neural-network-assisted identification, supporting preoperative planning, and postoperative correction planning can be assisted by comparing and analyzing the difference between the measured preoperative and postoperative force lines.
(10) A computer-readable storage medium storing a computer program for performing the steps of any of the above methods.
(11) An electronic device comprising a processor and a memory for storing processor-executable instructions, the processor being configured to read the executable instructions from the memory and execute the instructions to implement the steps of any of the methods described above.
The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them, which stand-alone device may communicate with the first device and the second device to receive the acquired input signals therefrom.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
In addition to the above-described methods and system devices, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in a method according to an embodiment of the present disclosure described in the "exemplary methods" section of this specification above.
It will be understood by those skilled in the art that the terms "first", "second", "third", etc. in the embodiments of the present invention are used only for distinguishing between different steps, devices or units, etc., and do not denote any particular technical meaning or necessarily order therebetween. It should also be understood that in embodiments of the present invention, "a plurality" may refer to two or more.
While the invention has been described with reference to various specific embodiments, it should be understood that changes can be made within the spirit and scope of the inventive concepts described. Accordingly, it is intended that the invention not be limited to the described embodiments, but that it will have the full scope defined by the language of the following claims.

Claims (10)

1. A method of using a neural network to assist in identifying a line of force, comprising the steps of:
a series of first images are obtained by combining a sequence of CT files obtained from a cross-sectional view into a complete three-dimensional image, followed by slicing from the coronal plane;
obtaining a second image by performing bilinear quantization processing on the first image, wherein the bilinear quantization processing adjusts the weight value of a pixel on the basis of bilinear interpolation: the weight value of the pixel is considered when the image is reduced, the span of 1-2 pixels of the original image becomes 2-3 pixels, and the weight is still calculated from the proportion each pixel occupies, so that the weight of the target image is no longer fixed to 1; in this way a target image smaller than the original image can be quantized, bilinear interpolation can not only interpolate upward but also quantize downward, and the image size can be quantized whenever a floating-point size appears during the downsampling of the image when it is segmented in a neural network;
inputting the second image containing the coronal image data of the femur and the tibia into a first multi-classification neural network for segmentation, so as to classify the femur and the tibia differently, and thereby segment the model of the femur and the model of the tibia at one time, wherein the second image is input into the first multi-classification neural network for segmentation based on the unet, so as to segment the femur image and the tibia image, wherein the down-sampling uses max pooling maxpool, the up-sampling uses bilinear interpolation, and a softmax activation function is used for outputting a third image which is used for completing task classification and relates to the femur image and the tibia image; and
determining a force line by searching key physiological points on the model of the femur and the model of the tibia by using a second segmentation neural network, wherein the third image is input into the second segmentation neural network, the contact surface between the femur and the hip joint is segmented, the second segmentation neural network is the same as the first multi-class segmentation neural network, but is activated by using a sigmoid activation function instead of a softmax activation function to obtain a final output result output as a fourth image, and the feature map is saved before each down-sampling for concat splicing with the feature map during the up-sampling,
identifying a femoral head center point as a key physiological point in a model of the femur by using the second segmentation-use neural network, and
and determining other femur knee joint central points, tibia knee joint central points and tibia ankle joint central points which are used as key physiological points based on point cloud data in the three-dimensional virtual image model reconstructed by the neural network for the second segmentation.
2. The method of claim 1,
the first image and the second image are both 512 x 512 pixels in size.
3. The method of claim 1,
and dividing the approximate hemisphere of the femoral head by using the neural network for second division, taking the sphere center fitted by the least square method as the central point of the femoral head, wherein 10 points are randomly selected in the approximate hemisphere, listing the obtained quaternary quadratic ten-term nonlinear equation set according to the spherical equation, solving the equation by using the least square method, and determining the average value of the cyclic iteration result as the central point of the femoral head.
4. The method according to claim 1 or 2, further comprising the steps of:
and classifying the pixels representing the thighbone based on the second segmentation neural network, calculating a point cloud data set of coordinate points of the lower quarter thighbone part of the cross section, calculating a mean value of a maximum value and a minimum value of position coordinates on a sagittal plane, and selecting the minimum value on the cross section to obtain the center point of the thighbone knee joint.
5. The method according to claim 1 or 2, further comprising the steps of:
and calculating a point cloud data set of coordinate points of the upper half of the tibia part of the cross section based on the pixel points classified as representing the tibia by the second segmentation neural network, calculating a mean value of a maximum value and a minimum value of position coordinates on a sagittal plane, and selecting the maximum value on the cross section to obtain a tibia knee joint center point.
6. The method according to claim 1 or 2, further comprising the steps of:
and calculating a coordinate point of the lower half of the tibia part of the cross section based on the pixel points classified as representing the tibia by the second segmentation neural network, and calculating the minimum value of the mean value of the maximum value and the minimum value of the position coordinates on the sagittal plane to obtain the tibia ankle joint central point.
7. A system for assisting in identifying a line of force using a neural network, configured to implement the method of any one of claims 1 to 6.
8. A detection system for assisting knee joint replacement, which determines preoperative and postoperative force lines using the neural-network-assisted identification of force lines, thereby assisting preoperative planning, and which can assist postoperative correction planning by comparing the determined preoperative and postoperative force lines,
wherein the method of any one of claims 1 to 6 is used to identify and measure a force line.
9. A computer-readable storage medium storing a computer program for performing the steps of the method of any one of claims 1 to 6.
10. An electronic device comprising a processor and a memory for storing processor-executable instructions, the processor being configured to read the executable instructions from the memory and execute the instructions to implement the steps of the method of any one of claims 1 to 6.
CN202111455887.XA 2021-12-01 2021-12-01 Method and system for recognizing force line by using neural network, storage medium and electronic device Active CN113870261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111455887.XA CN113870261B (en) 2021-12-01 2021-12-01 Method and system for recognizing force line by using neural network, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111455887.XA CN113870261B (en) 2021-12-01 2021-12-01 Method and system for recognizing force line by using neural network, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN113870261A CN113870261A (en) 2021-12-31
CN113870261B (en) 2022-05-13

Family

ID=78985437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111455887.XA Active CN113870261B (en) 2021-12-01 2021-12-01 Method and system for recognizing force line by using neural network, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN113870261B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993824A (en) * 2023-07-19 2023-11-03 北京长木谷医疗科技股份有限公司 Acetabular rotation center calculating method, device, equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652888A (en) * 2020-05-25 2020-09-11 北京长木谷医疗科技有限公司 Method and device for determining medullary cavity anatomical axis based on deep learning
CN113298786A (en) * 2021-05-26 2021-08-24 北京长木谷医疗科技有限公司 Image recognition and model training method, and true mortar position recognition method and device
CN113658142A (en) * 2021-08-19 2021-11-16 江苏金马扬名信息技术股份有限公司 Hip joint femur near-end segmentation method based on improved U-Net neural network

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104970904A (en) * 2014-04-14 2015-10-14 陆声 Individualized positioning template design for total knee prosthesis replacement on basis of MRI
CN105361883A (en) * 2014-08-22 2016-03-02 方学伟 Method for determining lower limb biological force line in three-dimensional space for total knee arthroplasty
CN108042217A (en) * 2017-12-21 2018-05-18 成都真实维度科技有限公司 A kind of definite method of three dimensions lower-limbs biology force-line
CN110613469B (en) * 2019-09-18 2020-09-15 北京理工大学 Automatic leg bone and lower limb force line detection method and device
CN111768400A (en) * 2020-07-07 2020-10-13 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111882532B (en) * 2020-07-15 2021-10-01 中国科学技术大学 Method for extracting key points in lower limb X-ray image
CN113017829B (en) * 2020-08-22 2023-08-29 张逸凌 Preoperative planning method, system, medium and device for total knee arthroplasty based on deep learning
CN112842529B (en) * 2020-12-31 2022-02-08 北京长木谷医疗科技有限公司 Total knee joint image processing method and device
CN112957126B (en) * 2021-02-10 2022-02-08 北京长木谷医疗科技有限公司 Deep learning-based unicondylar replacement preoperative planning method and related equipment
CN112971981B (en) * 2021-03-02 2022-02-08 北京长木谷医疗科技有限公司 Deep learning-based total hip joint image processing method and equipment
CN113076987B (en) * 2021-03-29 2022-05-20 北京长木谷医疗科技有限公司 Osteophyte identification method, device, electronic equipment and storage medium
CN113689402B (en) * 2021-08-24 2022-04-12 北京长木谷医疗科技有限公司 Deep learning-based femoral medullary cavity form identification method, device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652888A (en) * 2020-05-25 2020-09-11 北京长木谷医疗科技有限公司 Method and device for determining medullary cavity anatomical axis based on deep learning
CN113298786A (en) * 2021-05-26 2021-08-24 北京长木谷医疗科技有限公司 Image recognition and model training method, and true mortar position recognition method and device
CN113658142A (en) * 2021-08-19 2021-11-16 江苏金马扬名信息技术股份有限公司 Hip joint femur near-end segmentation method based on improved U-Net neural network

Also Published As

Publication number Publication date
CN113870261A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
JP5417321B2 (en) Semi-automatic contour detection method
CN113313234A (en) Neural network system and method for image segmentation
CN112639880A (en) Automatic determination of canonical poses of 3D objects and automatic superimposition of 3D objects using deep learning
EP3610456A1 (en) Recist assessment of tumour progression
US20200349699A1 (en) System and method for segmentation and visualization of medical image data
US20210287454A1 (en) System and method for segmentation and visualization of medical image data
CN109949280B (en) Image processing method, image processing apparatus, device storage medium, and growth evaluation system
CN115004223A (en) Method and system for automatic detection of anatomical structures in medical images
CN113506308B (en) Deep learning-based vertebra positioning and spine segmentation method in medical image
CN113096137B (en) Adaptive segmentation method and system for OCT (optical coherence tomography) retinal image field
CN110634133A (en) Knee joint orthopedic measurement method and device based on X-ray plain film
CN113870261B (en) Method and system for recognizing force line by using neural network, storage medium and electronic device
Zhao et al. Automatic Cobb angle measurement method based on vertebra segmentation by deep learning
Boutillon et al. Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors
CN114757908A (en) Image processing method, device and equipment based on CT image and storage medium
CN113838048A (en) Cruciate ligament preoperative insertion center positioning and ligament length calculating method
CN112750110A (en) Evaluation system for evaluating lung lesion based on neural network and related products
CN114787816A (en) Data enhancement for machine learning methods
WO2019167882A1 (en) Machine learning device and method
CN116485853A (en) Medical image registration method and device based on deep learning neural network
CN112884706B (en) Image evaluation system based on neural network model and related product
EP2534613B1 (en) Image analysis
CN113888751A (en) Method and device for identifying key points of joints and computer equipment
CN112581513B (en) Cone beam computed tomography image feature extraction and corresponding method
Alshamrani et al. Automation of Cephalometrics Using Machine Learning Methods

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant