CN117058149B - Method for training and identifying medical image measurement model of osteoarthritis - Google Patents

Method for training and identifying medical image measurement model of osteoarthritis

Info

Publication number
CN117058149B
Authority
CN
China
Prior art keywords
medical image
image measurement
measurement model
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311318617.3A
Other languages
Chinese (zh)
Other versions
CN117058149A (en)
Inventor
覃皓程
倪江东
杨舒
何桄旭
胡洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN202311318617.3A
Publication of CN117058149A
Application granted
Publication of CN117058149B

Links

Classifications

    • G06T 7/0012: Biomedical image inspection (Physics; Computing; Image data processing or generation, in general; Image analysis; Inspection of images, e.g. flaw detection)
    • G06N 3/0464: Convolutional networks [CNN, ConvNet] (Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology)
    • G06N 3/084: Backpropagation, e.g. using gradient descent (Neural networks; Learning methods)
    • G06V 10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN] (Image or video recognition or understanding; Extraction of image or video features; Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters, with interaction between the filter responses)
    • G06V 10/764: Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting (Processing image or video features in feature spaces; using data integration or data reduction, e.g. PCA, ICA or SOM)
    • G06V 10/82: Recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 2207/10116: X-ray image (Indexing scheme for image analysis or image enhancement; Image acquisition modality)
    • G06T 2207/30008: Bone (Subject of image; Biomedical image processing)
    • Y02T 10/40: Engine management systems (Climate change mitigation technologies related to transportation; Road transport of goods or passengers; Internal combustion engine [ICE] based vehicles)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a method for training a medical image measurement model for the identification of osteoarthritis, comprising the following steps: acquiring a radiation image data set and dividing it into a training set and a verification set; extracting image features, using a feature extractor to obtain feature matrix vectors for the several features annotated in the image samples; constructing a medical image measurement model and training it with the acquired feature matrix vectors until the loss function converges to a minimum; and testing the trained medical image measurement model to obtain its prediction result for identifying osteoarthritis together with the confidence of that result. Through the four stages of data set preparation, feature extraction, model training, and model testing and evaluation, the method trains a medical image measurement model capable of accurately identifying suspected osteoarthritis samples in X-ray films, thereby improving the accuracy with which the model identifies osteoarthritis.

Description

Method for training and identifying medical image measurement model of osteoarthritis
Technical Field
The invention relates to the technical field of image and video processing, and in particular to a method for training a medical image measurement model for the identification of osteoarthritis.
Background
Osteoarthritis is a whole-joint disease involving structural changes in the hyaline articular cartilage, subchondral bone, ligaments, joint capsule, synovium and periarticular muscles. Its complex pathogenesis ultimately leads to structural destruction and failure of the joint. The disease is an actively evolving process driven by an imbalance between repair and destruction of the joint caused by inflammatory, mechanical and metabolic factors. Knee osteoarthritis is conventionally diagnosed by comparing the radiograph to be diagnosed against standard knee joint X-ray films in order to determine the disease type of the image. However, knee X-ray films combined with current grading systems are not sensitive enough to detect early signs of disease progression.
With the development and maturation of artificial intelligence technology, it has gradually spread into many aspects of the medical field, and medical imaging in particular is currently a popular area for its application. Medical imaging is a useful tool for diagnosing many diseases, but the imaging process generates a large amount of image data; processing and interpreting these data costs a doctor a great deal of time, and the accuracy of recognition is difficult to guarantee. In medical images, artificial intelligence technology is therefore mainly used to identify tissue lesions, thereby improving the accuracy of tissue lesion identification. For example, the application with patent number 202011359970.2 discloses a training method and training system for tissue lesion recognition based on an artificial neural network, comprising: preparing a training data set that comprises a plurality of examination images with lesions and, associated with the examination images, annotated images carrying lesion annotation results; performing feature extraction on the examination images to obtain feature maps, and processing the examination images based on an attention mechanism to obtain attention heat maps; classifying the examination images with a first artificial neural network and combining the annotated images to obtain a first loss function; classifying the examination images with a second artificial neural network module based on the feature maps and the attention heat maps and combining the annotation results to obtain a second loss function; and judging with a third artificial neural network whether the examination images show disease to obtain a third loss function. By combining the three loss functions, the accuracy of tissue lesion identification can be effectively improved. However, that method is not suitable for the diagnosis of osteoarthritis. In view of this, there is a need in the industry for a method of training a medical image measurement model for identifying osteoarthritis, so as to obtain a model that can accurately identify the progressive signs of osteoarthritis.
Disclosure of Invention
The invention aims to provide a method for training a medical image measurement model for identifying osteoarthritis, so as to solve the problems in the background art.
To achieve the above object, the present invention provides a method for training a medical image measurement model for identifying osteoarthritis, the method comprising the steps of:
S1, acquiring a radiation image data set comprising normal knee joints and knee joints annotated with osteoarthritis, and dividing the radiation image data set into a training set and a verification set, wherein both the training set and the verification set comprise image sample batches for training the medical image measurement model;
s2, inputting the joint gap width characteristics, the subchondral bone strength characteristics and the joint line convergence angle characteristics marked in the image sample batch into a characteristic extractor to perform characteristic extraction so as to obtain a characteristic matrix vector with a preset dimension;
S3, first constructing a medical image measurement model, and then training the medical image measurement model with the acquired feature matrix vectors until the loss function converges to a minimum; the medical image measurement model consists of a neural network model and an attention mechanism network connected crosswise in parallel, wherein the neural network model comprises a convolution layer, a pooling layer, an encoder and a jump connection layer; the feature matrix vector is input into the neural network model and the attention mechanism network simultaneously so as to obtain a first output and a second output, the first output and the second output are multiplied and then converted by a linear layer, and the result of the medical image measurement model for identifying osteoarthritis is thereby obtained; the loss function is the Smooth Top-1 SVM loss, a smoothed version of the Top-1 SVM loss function, whose expression is:
where L is the loss function, k and τ are SVM training parameters, s and y are the input feature matrix vectors, and j is a positive integer greater than 1;
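The formula itself is not reproduced in this text. For reference only, the standard smoothed top-1 SVM loss from the machine-learning literature, written with the symbols defined here and with Δ(j, y) denoting the usual margin term (1 when j differs from the true class y and 0 otherwise), takes the form below; this is a hedged reconstruction, not necessarily the exact expression of the disclosure, and the parameter k generalizes it to the top-k case:

```latex
% Assumed reconstruction of the smoothed top-1 SVM loss with temperature tau;
% Delta(j, y) = 1 if j != y, 0 otherwise; s_j is the score for class j.
L_{1,\tau}(s, y) \;=\; \tau \,\log \sum_{j} \exp\!\left(\frac{\Delta(j, y) + s_j}{\tau}\right) \;-\; s_y
```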
S4, testing the trained medical image measurement model on the verification set so as to obtain the identification result of the medical image measurement model for osteoarthritis and the confidence of that result; if the obtained confidence meets a preset confidence threshold, determining the trained medical image measurement model to be the medical image measurement model for identifying osteoarthritis; and if the obtained confidence does not meet the preset confidence threshold, adjusting the hyper-parameters of the medical image measurement model until the confidence obtained on verification meets the preset confidence threshold.
Further, the method for training a medical image measurement model for identifying osteoarthritis further comprises the following steps:
S5, dividing the radiological image data set into six types based on the joint space width feature, the subchondral bone strength feature and the joint line convergence angle feature: normal joint; suspicious narrowing of the joint space with possible presence of osteophytes; obvious osteophytes with suspicious narrowing of the joint space; a moderate amount of osteophytes with obvious narrowing of the joint space; a large number of osteophytes with sclerotic change and obvious narrowing of the joint space; and severe sclerotic lesions with obvious deformity; and, based on the trained medical image measurement model, outputting for an input image sample the respective probability that it falls under each type.
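As an illustration of the per-type probability output described in S5, the six raw scores produced by a classifier can be turned into probabilities with a softmax; the score values in this sketch are placeholders, not outputs of the disclosed model:

```python
import torch

# Minimal sketch: convert six raw class scores (logits) into per-type probabilities.
# The logits below are placeholders; in practice they come from the trained model.
logits = torch.tensor([[2.1, 0.3, -1.0, 0.5, -0.7, -2.2]])  # shape (1, 6): one row per image sample
probs = torch.softmax(logits, dim=1)                         # each row sums to 1
predicted_type = int(probs.argmax(dim=1))                    # index of the most probable of the six types
print(probs, predicted_type)
```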
Further, in the step S2, the specific steps of inputting the joint space width feature, the subchondral bone strength feature and the joint line convergence angle feature, which are marked in the image sample batch, to the feature extractor to perform feature extraction are as follows:
s2.1, defining a region of interest on the radiological image data set, wherein the region of interest is a region formed by a tibia-femur line, a contact femur condyle line, a contact tibia line and a tibia-femur intermediate line;
s2.2, defining a knee joint boundary based on the defined region of interest, wherein the knee joint boundary is defined based on a contact femoral condyle line and a contact tibial line;
s2.3, calculating joint gap width characteristics based on knee joint boundaries, and fitting a plurality of joint circles on the knee joint boundaries, wherein the joint circles with the smallest diameters are used as the joint gap width characteristics;
s2.4, calculating the intensity characteristics of the subchondral bone based on the width characteristics of the joint gap determined in the step S2.3; and based on the angle between the contact femoral condyle line and the contact tibial line, the joint line convergence angle characteristic is finally determined.
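A minimal geometric sketch of steps S2.1 to S2.4, assuming the femoral and tibial boundaries are available as sampled point sets and using nearest-point distances in place of the fitted-circle procedure, might look as follows (all coordinates are placeholders):

```python
import numpy as np

# Illustrative geometry only, not the disclosed extractor: estimate the minimum
# joint space width and the joint line convergence angle from sampled boundary points.
femur = np.array([[x, 10.0 + 0.05 * x] for x in np.linspace(0, 50, 26)])  # femoral condyle boundary (mm)
tibia = np.array([[x, 4.0 - 0.02 * x] for x in np.linspace(0, 50, 26)])   # tibial plateau boundary (mm)

# Joint space width: smallest femur-to-tibia point distance, standing in for the
# smallest-diameter circle fitted between the two boundaries.
dists = np.linalg.norm(femur[:, None, :] - tibia[None, :, :], axis=2)
min_jsw = dists.min()

# Joint line convergence angle: angle between straight lines fitted to each boundary.
slope_f = np.polyfit(femur[:, 0], femur[:, 1], 1)[0]
slope_t = np.polyfit(tibia[:, 0], tibia[:, 1], 1)[0]
jlca_deg = np.degrees(abs(np.arctan(slope_f) - np.arctan(slope_t)))

print(f"min joint space width ~ {min_jsw:.2f} mm, JLCA ~ {jlca_deg:.2f} deg")
```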
Further, in the step S3, the specific step of training the medical image measurement model until the loss function converges to the minimum is:
s3.1, the vector input layer inputs the feature matrix vector into the convolution layer to obtain a convolution result of the feature matrix; wherein the convolution layer is composed of a plurality of convolution kernels;
s3.2, performing downsampling operation on the convolution result by using a pooling layer, so as to obtain a pooling result which reduces the size of the feature map and retains the features;
s3.3, inputting the pooling result into an encoder formed by alternately stacking a convolution layer and a pooling layer, so as to gradually extract abstract features in the feature matrix vector;
s3.4, inputting the abstract features extracted step by step into a decoder composed of a plurality of deconvolution layers and up-sampling operation, so as to reconstruct the abstract features into mapping features;
s3.5, connecting the encoder with the decoder by using a jump connection layer, so as to transmit the low-level characteristic information and the high-level characteristic information of the encoder to the decoder;
and S3.6, outputting a prediction result of the medical image measurement model for osteoarthritis identification based on the reconstructed mapping characteristics.
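A minimal PyTorch sketch of the structure described in step S3 and in S3.1 to S3.6 (a convolutional encoder-decoder with a skip connection, in parallel with an attention branch, the two outputs multiplied and passed through a linear layer) is given below; the reshape of the 64-dimensional feature matrix vector into an 8×8 map, the channel counts and the single encoder/decoder stage are assumptions for illustration, not the exact disclosed configuration:

```python
import torch
import torch.nn as nn

class DualBranchModel(nn.Module):
    """Sketch only: a small convolutional encoder-decoder with a skip connection,
    in parallel with a multi-head self-attention branch; both receive the same
    feature matrix vector, their outputs are multiplied elementwise and passed
    through a linear layer. Sizes and the 8x8 reshape are illustrative assumptions."""

    def __init__(self, dim=64, num_classes=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                                    # 8x8 -> 4x4 downsampling
            nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1), nn.ReLU(),          # back to 1 x 8 x 8
        )
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, num_classes)                 # final linear layer conversion

    def forward(self, x):                       # x: (batch, 64) feature matrix vector
        img = x.view(-1, 1, 8, 8)               # assumed reshape to a single-channel map
        enc = self.encoder(img)
        dec = self.decoder(enc) + img           # skip connection passes encoder-side information
        first_out = dec.flatten(1)              # (batch, 64), output of the neural network branch
        second_out, _ = self.attn(x.unsqueeze(1), x.unsqueeze(1), x.unsqueeze(1))
        second_out = second_out.squeeze(1)      # (batch, 64), output of the attention branch
        return self.head(first_out * second_out)   # multiply the two outputs, then linear layer

x = torch.randn(4, 64)
print(DualBranchModel()(x).shape)               # torch.Size([4, 1])
```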
Further, training the medical image measurement model until the loss function converges to a minimum further comprises:
determining a window size and a stride of the downsampling operation based on a maximum pooling method and parameters of the pooling layer;
inserting deconvolution layers for determining the window size and stride of the upsampling operation in the neural network model, thereby performing bilinear interpolation on a feature matrix;
an activation function layer, a normalization layer and a dropout layer are inserted into the neural network model, so that part of the feature maps are randomly discarded during training, reducing the risk of overfitting and enhancing the representation capability and training stability of the model.
Further, training the medical image measurement model until the loss function converges to a minimum further comprises:
calculating the distance between the output of the medical image measurement model and the annotated label using the cross-entropy loss function as the loss function; the parameters of the trained medical image measurement model are updated by a back-propagation algorithm until the loss function converges to a minimum.
Further, in the step S4, the method for adjusting the hyper-parameters of the medical image measurement model includes:
dividing the verification set of the radiation image data set into a plurality of subsets, and selecting one subset as a model verification set and the rest subsets as model training sets;
performing cross-validation, wherein in each round of cross-validation the trained medical image measurement model is trained on the model training set and evaluated on the model validation set to determine the performance metrics of the trained medical image measurement model;
recording the determined performance index of each cross verification to obtain an independent performance evaluation result;
averaging the performance indicators determined by performing cross-validation to determine an average performance assessment of the trained medical image measurement model;
based on the determined average performance assessment, model hyper-parameters are selected for the trained model.
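As a sketch of this hyper-parameter adjustment procedure, the following uses scikit-learn k-fold cross-validation with a stand-in classifier and synthetic data in place of the medical image measurement model and the radiological features; the candidate hyper-parameter grid is hypothetical:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# X and y are synthetic placeholders for the feature matrix vectors and labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 64)), rng.integers(0, 2, size=120)

candidate_params = [{"C": 0.1}, {"C": 1.0}, {"C": 10.0}]   # hypothetical hyper-parameter grid
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

best_params, best_score = None, -np.inf
for params in candidate_params:
    scores = []
    for train_idx, val_idx in kfold.split(X):               # one subset held out per fold
        model = LogisticRegression(max_iter=1000, **params)
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))
    mean_score = float(np.mean(scores))                     # average performance assessment
    if mean_score > best_score:
        best_params, best_score = params, mean_score        # keep the best hyper-parameters

print(best_params, round(best_score, 3))
```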
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses a method for training and identifying a medical image measurement model of osteoarthritis, which comprises the following steps of: acquiring a radiation image data set, and dividing the radiation image data set into a training set and a verification set; extracting image features, and acquiring feature matrix vectors of several features marked by the image samples by using a feature extractor; constructing a medical image measurement model, and training the medical image measurement model by using the acquired feature matrix vector until the loss function is converged to the minimum; and testing the trained medical image measurement model, and obtaining the prediction result of the trained medical image measurement model for identifying the osteoarthritis and the confidence coefficient of the prediction result. According to the method, through four steps of data set preparation, feature extraction, model training and model test and evaluation, training of a medical image measurement model is achieved, so that a suspected arthritis sample in an x-ray film can be accurately identified, and the accuracy of the medical image measurement model in identifying osteoarthritis is improved.
In addition to the objects, features and advantages described above, the present invention has other objects, features and advantages. The invention will be described in further detail with reference to the accompanying drawings.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain, without limitation, the embodiments of the invention. In the drawings:
FIG. 1 is a flow chart of a method of the present invention for training a medical image measurement model for the identification of osteoarthritis;
fig. 2 is a schematic diagram of an input image sample according to one embodiment of the present invention.
Detailed Description
Embodiments of the invention are described in detail below with reference to the attached drawings, but the invention can be implemented in a number of different ways, which are defined and covered by the claims.
Referring to fig. 1, the present embodiment provides a method for training a medical image measurement model for identifying osteoarthritis, the method comprising the following specific steps:
1. Data set preparation stage: acquiring a radiological image data set comprising normal knee joints and knee joints annotated with osteoarthritis, and dividing the radiological image data set into a training set and a verification set; both the training set and the verification set comprise image sample batches for training the medical image measurement model.
2. Feature extraction stage: the joint space width feature, the subchondral bone strength feature and the joint line convergence angle feature are extracted from the image sample batch, input into a feature extractor, and converted into feature matrix vectors of a predetermined dimension. The specific steps are as follows: defining a region of interest on the radiological image data set, the region of interest being the region formed by a tibia-femur line, a contact femoral condyle line, a contact tibia line and a tibia-femur intermediate line; defining a knee joint boundary based on the defined region of interest, the knee joint boundary being defined from the contact femoral condyle line and the contact tibial line; calculating the joint space width feature based on the knee joint boundary by fitting a plurality of joint circles on the knee joint boundary, the joint circle with the smallest diameter being used as the joint space width feature; calculating the subchondral bone strength feature based on the joint space width feature determined in the preceding step; and finally determining the joint line convergence angle feature from the angle between the contact femoral condyle line and the contact tibial line.
3. Model training stage: a medical image measurement model is constructed, formed by a neural network model and an attention mechanism network connected crosswise in parallel, wherein the neural network model comprises an input layer, a convolution layer, a pooling layer, an encoder, a decoder, a skip connection layer, an output layer and the like. The medical image measurement model is trained with the feature matrix vectors obtained in stage 2 as input data until the loss function converges to a minimum. The feature matrix vector is input into the neural network model and the attention mechanism network at the same time, i.e., the neural network model and the attention mechanism network receive the same feature matrix vector as input, so as to obtain a first output and a second output; the first output and the second output are multiplied and converted by a linear layer to obtain the prediction result of the medical image measurement model for identifying osteoarthritis. The specific steps are as follows: the vector input layer feeds the feature matrix vector into the convolution layer to obtain the convolution result of the feature matrix, the convolution layer being composed of a plurality of convolution kernels; a downsampling operation is performed on the convolution result with the pooling layer to obtain a pooling result that reduces the size of the feature map while retaining the features; the pooling result is input into an encoder formed by alternately stacked convolution and pooling layers so as to progressively extract the abstract features in the feature matrix vector; the progressively extracted abstract features are input into a decoder composed of a plurality of deconvolution layers and upsampling operations so as to reconstruct them into mapping features; the encoder is connected with the decoder by the skip connection layer so as to pass the low-level and high-level feature information of the encoder to the decoder; and, based on the reconstructed mapping features, the prediction result of the medical image measurement model for osteoarthritis identification is output.
In a specific embodiment, the convolution layer of the neural network model may be: 3 convolution kernels, each of size 3×3, with stride 1 and padding 1; pooling layer: max pooling with a 2×2 pooling window and stride 2; encoder: 3 encoder blocks, each containing two convolution layers and one batch normalization layer, with ReLU as the activation function; skip connection layer: connects the output of the encoder with the input of the decoder; output layer: linear layer conversion with an output dimension of 1. The input feature matrix vector of the attention mechanism network has dimension D; a multi-head self-attention mechanism with 4 heads is used; within each head, the query vector dimension dq = D/4, the key vector dimension dk = D/4 and the value vector dimension dv = D/4; the attention weights are computed by dot-product attention, and the attention-weighted output vector has dimension D/4. The loss function may, for example, be the Smooth Top-1 SVM loss function, a smoothed version of the Top-1 SVM loss function, expressed as:
in the above formula: l is a loss function, k and τ are SVM training parameters, s and y are feature matrix vectors of the input,jis a positive integer greater than 1.
4. Model testing and evaluation stage: the trained medical image measurement model is tested with the verification set so as to obtain the identification result of the medical image measurement model for osteoarthritis and the confidence of that result. Confidence threshold judgment: the confidence obtained from the test is compared with a preset confidence threshold. If the obtained confidence meets the preset confidence threshold, the trained medical image measurement model is determined to be the medical image measurement model for identifying osteoarthritis. If the obtained confidence does not meet the preset confidence threshold, the hyper-parameters of the medical image measurement model, such as the learning rate and the batch size, are adjusted, and model training and testing are carried out again until the confidence obtained by the medical image measurement model on the verification set meets the preset confidence threshold. The method for adjusting the hyper-parameters of the medical image measurement model is as follows: dividing the verification set of the radiation image data set into a plurality of subsets, and selecting one subset as the model verification set and the remaining subsets as the model training set; performing cross-validation, wherein in each round the trained medical image measurement model is trained on the model training set and evaluated on the model validation set to determine its performance metrics; recording the performance metrics determined in each round to obtain independent performance evaluation results; averaging the performance metrics across the rounds to determine an average performance assessment of the trained medical image measurement model; and, based on the determined average performance assessment, selecting the model hyper-parameters for the trained medical image measurement model.
In a specific embodiment, the method for training a medical image measurement model for identifying osteoarthritis of the invention further comprises step S5: dividing the radiological image data set into six types based on the joint space width feature, the subchondral bone strength feature and the joint line convergence angle feature: normal joint; suspicious narrowing of the joint space with possible presence of osteophytes; obvious osteophytes with suspicious narrowing of the joint space; a moderate amount of osteophytes with obvious narrowing of the joint space; a large number of osteophytes with sclerotic change and obvious narrowing of the joint space; and severe sclerotic lesions with obvious deformity. Based on the trained medical image measurement model, inputting an image sample to be examined into the medical image measurement model outputs the corresponding probability that the input image sample belongs to each type.
In a specific embodiment, training the medical image measurement model until the loss function converges to a minimum further comprises: determining the window size and stride of the downsampling operation based on the max pooling method and the parameters of the pooling layer; inserting deconvolution layers in the neural network model to determine the window size and stride of the upsampling operation, so that bilinear interpolation is performed on the feature matrix; and inserting an activation function layer, a normalization layer and a dropout layer in the neural network model so that part of the feature maps are randomly discarded during training, reducing the risk of overfitting and enhancing the representation capability and training stability of the model. The cross-entropy loss function is used as the loss function to calculate the distance between the output of the medical image measurement model and the annotated label, and the parameters of the trained medical image measurement model are updated by a back-propagation algorithm until the loss function converges to a minimum.
In a specific embodiment, the input layer of the neural network model has an input feature vector dimension D = 64; each convolution layer has N = 32 convolution kernels of size K×K = 3×3, stride S = 1 and padding P = 1; the pooling layer uses max pooling with a window of size P×P = 2×2 and stride S = 2; the encoder has M = 4 encoder blocks, each with N = 32 convolution kernels, a kernel size of K×K = 3×3, stride S = 1 and padding P = 1 for every convolution layer, regularized with batch normalization (Batch Normalization); the decoder has M = 4 decoder blocks, each with N = 32 deconvolution kernels, a deconvolution kernel size of K×K = 3×3, stride S = 1 and padding P = 1 for every deconvolution layer; the skip connection layer establishes skip connections between the encoder and the decoder; and the output layer of the neural network model performs a linear layer conversion with an output dimension of 1 (for the osteoarthritis recognition prediction). Pooling layer parameters for the downsampling operation: window size P×P (e.g., 2×2) and stride S (e.g., 2). Deconvolution layer parameters for the upsampling operation: window size K×K (e.g., 2×2) and stride S (e.g., 2); the upsampling operation is performed by bilinear interpolation. Activation function layer: an activation function (e.g., ReLU) is applied after each convolution layer in the encoder and decoder. Normalization layer: a normalization layer (e.g., batch normalization) is inserted after the activation function layer to enhance the training stability of the model and help prevent overfitting. Dropout layer: a dropout layer (Dropout) is inserted after the normalization layer to randomly discard a portion of the feature maps, reducing the risk of overfitting and enhancing the representation capability and generalization performance of the model. Loss function: a cross-entropy loss function (Cross Entropy Loss) is used to calculate the distance between the output of the medical image measurement model and the annotated label.
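One possible PyTorch rendering of a single encoder block and decoder block with the parameters listed above is sketched below; the channel counts between blocks and the dropout probability are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Sketch: one encoder block and one decoder block with 3x3 kernels (stride 1,
# padding 1), 2x2 max pooling with stride 2, bilinear upsampling, ReLU,
# batch normalization after the activation, and dropout after normalization.
def encoder_block(in_ch, out_ch=32, p_drop=0.1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),                  # normalization after the activation
        nn.Dropout2d(p_drop),                    # randomly drops part of the feature maps
        nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),
        nn.MaxPool2d(kernel_size=2, stride=2),   # downsampling window 2x2, stride 2
    )

def decoder_block(in_ch, out_ch=32):
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),  # bilinear upsampling
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),
    )

x = torch.randn(2, 1, 16, 16)                    # placeholder input feature map
h = encoder_block(1)(x)                          # -> (2, 32, 8, 8)
y = decoder_block(32)(h)                         # -> (2, 32, 16, 16)
print(h.shape, y.shape)
```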
In a specific embodiment, the parameters of the trained medical image measurement model are updated using a back-propagation algorithm until the loss function converges to a minimum. Specifically, the learning rate (Learning Rate) α = 0.001, the batch size (Batch Size) B = 32, the number of training iterations (Epochs) E = 100, and the optimizer is the Adam optimizer. In each training iteration, the back-propagation algorithm is executed as follows: 1) perform forward-propagation computation on the input training samples and the corresponding label data to obtain the output of the model; 2) calculate the distance between the output result and the label using the cross-entropy loss function; 3) calculate the back-propagated gradients from the loss function, computing the gradient of each layer in turn through the chain rule; 4) update the model parameters so as to reduce the value of the loss function; 5) update the weights and biases of the model from the gradient information using the Adam optimizer; 6) repeat the above steps until the preset number of training iterations is reached.
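A minimal sketch of this training procedure with the stated hyper-parameters (cross-entropy loss, Adam, learning rate 0.001, batch size 32, 100 epochs), using a placeholder two-layer network and random data in place of the measurement model and the feature matrix vectors:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data and model; the real inputs would be the extracted feature matrix vectors.
X = torch.randn(256, 64)
y = torch.randint(0, 6, (256,))                      # six radiographic types
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 6))
criterion = nn.CrossEntropyLoss()                    # distance between output and annotated label
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(100):                             # preset number of training iterations
    for xb, yb in loader:
        logits = model(xb)                           # 1) forward propagation
        loss = criterion(logits, yb)                 # 2) loss between output and label
        optimizer.zero_grad()
        loss.backward()                              # 3) back-propagate gradients via the chain rule
        optimizer.step()                             # 4)-5) Adam updates weights and biases
```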
Referring now to FIG. 2 in combination, a schematic illustration of an input image sample in this example is shown; the specific steps of inputting the joint gap width characteristic, the subchondral bone strength characteristic and the joint line convergence angle characteristic marked in the image sample batch to the characteristic extractor for executing characteristic extraction are as follows:
in the radiological image, 4 lines are provided, L1 being a line contacting the lateral curve passing from the tibia to the femur, L2 being a line contacting the femoral condyle, L3 being a line contacting the tibial plateau, and L4 being a line contacting the medial curve of the tibia and femur. The region surrounded by L1, L2, L3, and L4 is defined as a region of interest.
Defining the knee joint boundary: the positions of two upward perpendicular lines on line L2 are calculated for the lateral and medial portions, and the same process is performed downward on line L3, the perpendicular being placed on line L3. The same operation is performed for the perpendicular to line L2 at 2/15 CD inward from points C and D. The inner perpendicular is located at 3/20 of the outer perpendicular on lines L2 and L3. The joint boundary is set by ODIA from the marked bone edges (anterior-lateral and medial tibial plateau, distal femoral condyle) and the perpendicular lines.
Calculating the average medial/lateral joint space width feature and the minimum joint space width feature: thirty intra-articular circles are fitted between the joint boundaries, and the circle diameters are used as the calculated joint space width features. The smallest circle diameter in the middle or posterior region is designated as the minimum joint space width feature.
Calculating the average subchondral bone strength: each X-ray film includes a calibration phantom (or wedge) of known dimensions (mm) made of aluminium (Al). The calibration phantom is identified and the intensity curve, expressed in mmAl, is calibrated. Four circles are placed on the joint boundary (bone), and the average subchondral bone strength within those circles, defined in mmAl, is calculated.
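A sketch of the mmAl calibration described here, assuming a linear relation between the grey values measured on the aluminium wedge steps and their known thicknesses; all numerical values are placeholders:

```python
import numpy as np

# Grey values measured on the aluminium step wedge are mapped to known thicknesses
# (mm of Al); the fitted curve then converts the mean grey value inside each of the
# four subchondral circles into a bone-strength value expressed in mmAl.
wedge_grey = np.array([40., 70., 105., 135., 160.])   # measured grey values of the wedge steps
wedge_mmal = np.array([1., 2., 3., 4., 5.])           # known step thicknesses in mm of aluminium
calib = np.polyfit(wedge_grey, wedge_mmal, deg=1)     # linear grey-value -> mmAl calibration

circle_means = np.array([120., 128., 111., 125.])     # mean grey value in each of the four circles
circle_mmal = np.polyval(calib, circle_means)         # per-circle strength in mmAl
avg_subchondral_strength = circle_mmal.mean()
print(round(float(avg_subchondral_strength), 2), "mmAl")
```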
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. A method for training a medical image measurement model for the identification of osteoarthritis, the method comprising the steps of:
S1, acquiring a radiation image data set comprising normal knee joints and knee joints annotated with osteoarthritis, and dividing the radiation image data set into a training set and a verification set, wherein both the training set and the verification set comprise image sample batches for training the medical image measurement model;
s2, inputting the joint gap width characteristics, the subchondral bone strength characteristics and the joint line convergence angle characteristics marked in the image sample batch into a characteristic extractor to perform characteristic extraction so as to obtain a characteristic matrix vector with a preset dimension;
S3, first constructing a medical image measurement model, and then training the medical image measurement model with the acquired feature matrix vectors until the loss function converges to a minimum; the medical image measurement model consists of a neural network model and an attention mechanism network connected crosswise in parallel, wherein the neural network model comprises an input layer, a convolution layer, a pooling layer, an encoder, a decoder, a jump connection layer and an output layer; the feature matrix vector is input into the neural network model and the attention mechanism network simultaneously so as to obtain a first output and a second output, the obtained first output and second output are multiplied and then converted by a linear layer, and the prediction result of the medical image measurement model for identifying osteoarthritis is thereby obtained; the loss function is the Smooth Top-1 SVM loss, a smoothed version of the Top-1 SVM loss function, whose expression is:
in the above formula, L is a loss function, k and τ are SVM training parameters, s and y are input feature matrix vectors, and j is a positive integer greater than 1;
S4, testing the trained medical image measurement model based on the verification set so as to obtain the identification result of the medical image measurement model for osteoarthritis and the confidence of that result; if the obtained confidence meets a preset confidence threshold, determining the trained medical image measurement model to be the medical image measurement model for identifying osteoarthritis; and if the obtained confidence does not meet the preset confidence threshold, adjusting the hyper-parameters of the medical image measurement model until the confidence obtained on verification meets the preset confidence threshold;
in the step S2, the specific steps of inputting the joint space width feature, the subchondral bone strength feature and the joint line convergence angle feature marked in the image sample batch to the feature extractor to perform feature extraction are as follows:
s2.1, defining a region of interest on the radiological image data set, wherein the region of interest is a region formed by a tibia-femur line, a contact femur condyle line, a contact tibia line and a tibia-femur intermediate line;
s2.2, defining a knee joint boundary based on the defined region of interest, wherein the knee joint boundary is defined based on a contact femoral condyle line and a contact tibial line;
s2.3, calculating joint gap width characteristics based on knee joint boundaries, and fitting a plurality of joint circles on the knee joint boundaries, wherein the joint circles with the smallest diameters are used as the joint gap width characteristics;
s2.4, calculating the intensity characteristics of the subchondral bone based on the width characteristics of the joint gap determined in the step S2.3; and based on the included angle between the contact femoral condyle line and the contact tibial line, finally determining the converging angle characteristic of the joint line;
in the step S3, the specific step of training the medical image measurement model until the loss function converges to the minimum is as follows:
s3.1, the vector input layer inputs the feature matrix vector into the convolution layer to obtain a convolution result of the feature matrix; wherein the convolution layer is composed of a plurality of convolution kernels;
s3.2, performing downsampling operation on the convolution result by using a pooling layer, so as to obtain a pooling result which reduces the size of the feature map and retains the features;
s3.3, inputting the pooling result into an encoder formed by alternately stacking a convolution layer and a pooling layer, so as to gradually extract abstract features in the feature matrix vector;
s3.4, inputting the abstract features extracted step by step into a decoder composed of a plurality of deconvolution layers and up-sampling operation, so as to reconstruct the abstract features into mapping features;
s3.5, connecting the encoder with the decoder by using a jump connection layer, so as to transmit the low-level characteristic information and the high-level characteristic information of the encoder to the decoder;
s3.6, outputting a prediction result of the medical image measurement model for osteoarthritis identification based on the reconstructed mapping characteristics;
in the step S3, training the medical image measurement model until the loss function converges to a minimum further includes:
determining a window size and a stride of the downsampling operation based on a maximum pooling method and parameters of the pooling layer;
inserting deconvolution layers for determining the window size and stride of the upsampling operation in the neural network model, thereby performing bilinear interpolation on a feature matrix;
inserting an activation function layer, a normalization layer and a dropout layer in the neural network model, so that part of the feature maps are randomly discarded during training, reducing the risk of overfitting and enhancing the representation capability and training stability of the model;
calculating the distance between the output of the medical image measurement model and the annotated label using the cross-entropy loss function as the loss function; and updating the parameters of the trained medical image measurement model by a back-propagation algorithm until the loss function converges to a minimum.
2. The method according to claim 1, characterized in that the method further comprises the steps of:
S5, dividing the radiological image data set into six types based on the joint space width feature, the subchondral bone strength feature and the joint line convergence angle feature: normal joint; suspicious narrowing of the joint space with possible presence of osteophytes; obvious osteophytes with suspicious narrowing of the joint space; a moderate amount of osteophytes with obvious narrowing of the joint space; a large number of osteophytes with sclerotic change and obvious narrowing of the joint space; and severe sclerotic lesions with obvious deformity; and, based on the trained medical image measurement model, outputting for an input image sample the respective probability that it falls under each type.
3. The method according to claim 1, wherein in the step S4, the method for adjusting the hyper-parameters of the medical image measurement model comprises:
dividing the verification set of the radiation image data set into a plurality of subsets, and selecting one subset as a model verification set and the rest subsets as model training sets;
performing cross-validation, wherein in each round of cross-validation the trained medical image measurement model is trained on the model training set and evaluated on the model validation set to determine the performance metrics of the trained medical image measurement model;
recording the determined performance index of each cross verification to obtain an independent performance evaluation result;
averaging the performance indicators determined by performing cross-validation to determine an average performance assessment of the trained medical image measurement model;
based on the determined average performance assessment, model hyper-parameters are selected for the trained medical image measurement model.
CN202311318617.3A 2023-10-12 2023-10-12 Method for training and identifying medical image measurement model of osteoarthritis Active CN117058149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311318617.3A CN117058149B (en) 2023-10-12 2023-10-12 Method for training and identifying medical image measurement model of osteoarthritis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311318617.3A CN117058149B (en) 2023-10-12 2023-10-12 Method for training and identifying medical image measurement model of osteoarthritis

Publications (2)

Publication Number Publication Date
CN117058149A CN117058149A (en) 2023-11-14
CN117058149B true CN117058149B (en) 2024-01-02

Family

ID=88663129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311318617.3A Active CN117058149B (en) 2023-10-12 2023-10-12 Method for training and identifying medical image measurement model of osteoarthritis

Country Status (1)

Country Link
CN (1) CN117058149B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372306B (en) * 2023-11-23 2024-03-01 山东省人工智能研究院 Pulmonary medical image enhancement method based on double encoders

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432720A (en) * 2017-10-06 2020-07-17 梅约医学教育与研究基金会 ECG-based cardiac ejection fraction screening
CN111768399A (en) * 2020-07-07 2020-10-13 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
WO2022170768A1 (en) * 2021-02-10 2022-08-18 北京长木谷医疗科技有限公司 Unicondylar joint image processing method and apparatus, device, and storage medium
CN115760770A (en) * 2022-11-18 2023-03-07 武汉轻工大学 Knee bone joint image intelligent detection method and device, electronic equipment and readable medium
CN116258726A (en) * 2023-02-23 2023-06-13 四川大学 Temporal-mandibular joint MRI image important structure segmentation method based on deep learning
CN116543221A (en) * 2023-05-12 2023-08-04 北京长木谷医疗科技股份有限公司 Intelligent detection method, device and equipment for joint pathology and readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150250552A1 (en) * 2014-02-08 2015-09-10 Conformis, Inc. Advanced methods of modeling knee joint kinematics and designing surgical repair systems
WO2016044352A1 (en) * 2014-09-15 2016-03-24 Conformis, Inc. 3d printing surgical repair systems
US10176642B2 (en) * 2015-07-17 2019-01-08 Bao Tran Systems and methods for computer assisted operation
US11705226B2 (en) * 2019-09-19 2023-07-18 Tempus Labs, Inc. Data based cancer research and treatment systems and methods
US20220351828A1 (en) * 2019-10-03 2022-11-03 Howmedica Osteonics Corp. Cascade of machine learning models to suggest implant components for use in orthopedic joint repair surgeries
BR112022015451A2 (en) * 2020-02-06 2022-09-27 Unigen Inc COMPOSITIONS AND METHODS FOR REGULATION OF CHONDROCYTE HOMEOSTASIS, EXTRACELLULAR MATRIX, JOINT CRTILAGE, AND ARTHRITIS PHENOTYPE
US20230139841A1 (en) * 2021-10-28 2023-05-04 Phyxd Inc. System and method for evaluating patient data
US20230186469A1 (en) * 2021-12-10 2023-06-15 Alpha Intelligence Manifolds, Inc. Methods of grading and monitoring osteoarthritis

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432720A (en) * 2017-10-06 2020-07-17 梅约医学教育与研究基金会 ECG-based cardiac ejection fraction screening
CN111768399A (en) * 2020-07-07 2020-10-13 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
WO2022170768A1 (en) * 2021-02-10 2022-08-18 北京长木谷医疗科技有限公司 Unicondylar joint image processing method and apparatus, device, and storage medium
CN115760770A (en) * 2022-11-18 2023-03-07 武汉轻工大学 Knee bone joint image intelligent detection method and device, electronic equipment and readable medium
CN116258726A (en) * 2023-02-23 2023-06-13 四川大学 Temporal-mandibular joint MRI image important structure segmentation method based on deep learning
CN116543221A (en) * 2023-05-12 2023-08-04 北京长木谷医疗科技股份有限公司 Intelligent detection method, device and equipment for joint pathology and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mohamed Yacin Sikkandar et al. Automatic Detection and Classification of Human Knee Osteoarthritis Using Convolutional Neural Networks. Computers, Materials & Continua, 2022, pp. 4279-4289. *
Automatic diagnosis and grading system for knee arthritis based on deep learning; 邱松炜 (Qiu Songwei); China Master's Theses Full-text Database, Medicine & Health Sciences (No. 02); pp. E066-1254 *

Also Published As

Publication number Publication date
CN117058149A (en) 2023-11-14

Similar Documents

Publication Publication Date Title
Morales Martinez et al. Learning osteoarthritis imaging biomarkers from bone surface spherical encoding
CN111986177A (en) Chest rib fracture detection method based on attention convolution neural network
CN117058149B (en) Method for training and identifying medical image measurement model of osteoarthritis
Liu et al. A fully automatic segmentation algorithm for CT lung images based on random forest
Zhang et al. MRLN: Multi-task relational learning network for mri vertebral localization, identification, and segmentation
Hussain et al. Deep learning-based diagnosis of disc degenerative diseases using MRI: a comprehensive review
CN114795258A (en) Child hip joint dysplasia diagnosis system
Irene et al. Segmentation and approximation of blood volume in intracranial hemorrhage patients based on computed tomography scan images using deep learning method
Bhat et al. Identification of intracranial hemorrhage using ResNeXt model
Shi et al. Automatic localization and segmentation of vertebral bodies in 3D CT volumes with deep learning
Fahmi et al. Automatic detection of brain tumor on computed tomography images for patients in the intensive care unit
Chaudhury et al. Using features from tumor subregions of breast dce-mri for estrogen receptor status prediction
Oliver et al. Automatic diagnosis of masses by using level set segmentation and shape description
Rahman et al. Detection of intracranial hemorrhage on CT scan images using convolutional neural network
CN115953416A (en) Automatic knee bone joint nuclear magnetic resonance image segmentation method based on deep learning
Chen et al. Femoral head segmentation based on improved fully convolutional neural network for ultrasound images
Alexopoulos et al. Early detection of knee osteoarthritis using deep learning on knee magnetic resonance images
Bouslimi et al. Deep Learning Based Localisation and Segmentation of Prostate Cancer from mp-MRI Images
Manikandan et al. Automated classification of emphysema using data augmentation and effective pixel location estimation with multi-scale residual network
Fang et al. A Multitarget Interested Region Extraction Method for Wrist X-Ray Images Based on Optimized AlexNet and Two-Class Combined Model
Arumugam et al. Prediction of severity of Knee Osteoarthritis on X-ray images using deep learning
Malibari et al. Gaussian Optimized Deep Learning-based Belief Classification Model for Breast Cancer Detection.
KR20210001233A (en) Method for Blood Vessel Segmentation
Shah et al. Reliable Breast Cancer Diagnosis with Deep Learning: DCGAN-Driven Mammogram Synthesis and Validity Assessment
Leyang et al. Review of method to automatic detection of COVID-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant